00:00:00.001 Started by upstream project "autotest-per-patch" build number 130496 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.042 The recommended git tool is: git 00:00:00.042 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.085 Fetching changes from the remote Git repository 00:00:00.093 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.118 Using shallow fetch with depth 1 00:00:00.118 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.118 > git --version # timeout=10 00:00:00.165 > git --version # 'git version 2.39.2' 00:00:00.165 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.105 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.120 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.133 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:03.133 > git config core.sparsecheckout # timeout=10 00:00:03.145 > git read-tree -mu HEAD # timeout=10 00:00:03.163 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:03.186 Commit message: "kid: add issue 3541" 00:00:03.186 > git rev-list --no-walk 
7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:03.284 [Pipeline] Start of Pipeline 00:00:03.300 [Pipeline] library 00:00:03.302 Loading library shm_lib@master 00:00:03.302 Library shm_lib@master is cached. Copying from home. 00:00:03.319 [Pipeline] node 00:00:18.321 Still waiting to schedule task 00:00:18.321 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:38.788 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest_2 00:01:38.790 [Pipeline] { 00:01:38.798 [Pipeline] catchError 00:01:38.799 [Pipeline] { 00:01:38.811 [Pipeline] wrap 00:01:38.818 [Pipeline] { 00:01:38.826 [Pipeline] stage 00:01:38.828 [Pipeline] { (Prologue) 00:01:38.844 [Pipeline] echo 00:01:38.846 Node: VM-host-WFP1 00:01:38.851 [Pipeline] cleanWs 00:01:38.860 [WS-CLEANUP] Deleting project workspace... 00:01:38.860 [WS-CLEANUP] Deferred wipeout is used... 00:01:38.865 [WS-CLEANUP] done 00:01:39.069 [Pipeline] setCustomBuildProperty 00:01:39.155 [Pipeline] httpRequest 00:01:39.561 [Pipeline] echo 00:01:39.563 Sorcerer 10.211.164.101 is alive 00:01:39.574 [Pipeline] retry 00:01:39.576 [Pipeline] { 00:01:39.591 [Pipeline] httpRequest 00:01:39.596 HttpMethod: GET 00:01:39.597 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:01:39.597 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:01:39.598 Response Code: HTTP/1.1 200 OK 00:01:39.599 Success: Status code 200 is in the accepted range: 200,404 00:01:39.599 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:01:39.745 [Pipeline] } 00:01:39.763 [Pipeline] // retry 00:01:39.771 [Pipeline] sh 00:01:40.053 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:01:40.069 [Pipeline] httpRequest 00:01:40.473 [Pipeline] echo 00:01:40.475 Sorcerer 10.211.164.101 is alive 00:01:40.485 [Pipeline] retry 00:01:40.487 [Pipeline] { 
00:01:40.503 [Pipeline] httpRequest 00:01:40.507 HttpMethod: GET 00:01:40.508 URL: http://10.211.164.101/packages/spdk_a2e043c42de0eec2f670c9986b801be8a9c81d38.tar.gz 00:01:40.509 Sending request to url: http://10.211.164.101/packages/spdk_a2e043c42de0eec2f670c9986b801be8a9c81d38.tar.gz 00:01:40.510 Response Code: HTTP/1.1 200 OK 00:01:40.510 Success: Status code 200 is in the accepted range: 200,404 00:01:40.511 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_a2e043c42de0eec2f670c9986b801be8a9c81d38.tar.gz 00:01:42.736 [Pipeline] } 00:01:42.756 [Pipeline] // retry 00:01:42.764 [Pipeline] sh 00:01:43.049 + tar --no-same-owner -xf spdk_a2e043c42de0eec2f670c9986b801be8a9c81d38.tar.gz 00:01:45.612 [Pipeline] sh 00:01:45.894 + git -C spdk log --oneline -n5 00:01:45.894 a2e043c42 bdev/passthru: add bdev_io_stack support 00:01:45.894 0b6673e39 bdev: Add spdk_bdev_io_submit API 00:01:45.894 990fe4508 bdev: Add spdk_bdev_io_to_ctx 00:01:45.894 1463c4852 test/unit: remove unneeded MOCKs from ftl unit tests 00:01:45.894 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:45.913 [Pipeline] writeFile 00:01:45.928 [Pipeline] sh 00:01:46.211 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:46.228 [Pipeline] sh 00:01:46.568 + cat autorun-spdk.conf 00:01:46.568 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.568 SPDK_RUN_ASAN=1 00:01:46.568 SPDK_RUN_UBSAN=1 00:01:46.568 SPDK_TEST_RAID=1 00:01:46.568 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.574 RUN_NIGHTLY=0 00:01:46.575 [Pipeline] } 00:01:46.588 [Pipeline] // stage 00:01:46.617 [Pipeline] stage 00:01:46.619 [Pipeline] { (Run VM) 00:01:46.631 [Pipeline] sh 00:01:46.910 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:46.910 + echo 'Start stage prepare_nvme.sh' 00:01:46.910 Start stage prepare_nvme.sh 00:01:46.910 + [[ -n 1 ]] 00:01:46.910 + disk_prefix=ex1 00:01:46.910 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:01:46.910 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:01:46.910 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:01:46.910 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.910 ++ SPDK_RUN_ASAN=1 00:01:46.910 ++ SPDK_RUN_UBSAN=1 00:01:46.910 ++ SPDK_TEST_RAID=1 00:01:46.910 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.910 ++ RUN_NIGHTLY=0 00:01:46.910 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:01:46.910 + nvme_files=() 00:01:46.910 + declare -A nvme_files 00:01:46.910 + backend_dir=/var/lib/libvirt/images/backends 00:01:46.910 + nvme_files['nvme.img']=5G 00:01:46.910 + nvme_files['nvme-cmb.img']=5G 00:01:46.910 + nvme_files['nvme-multi0.img']=4G 00:01:46.910 + nvme_files['nvme-multi1.img']=4G 00:01:46.910 + nvme_files['nvme-multi2.img']=4G 00:01:46.910 + nvme_files['nvme-openstack.img']=8G 00:01:46.910 + nvme_files['nvme-zns.img']=5G 00:01:46.910 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:46.910 + (( SPDK_TEST_FTL == 1 )) 00:01:46.910 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:46.910 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:46.910 + for nvme in "${!nvme_files[@]}" 00:01:46.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:46.910 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:46.910 + for nvme in "${!nvme_files[@]}" 00:01:46.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:46.910 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:46.910 + for nvme in "${!nvme_files[@]}" 00:01:46.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:46.910 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:46.910 + for nvme in "${!nvme_files[@]}" 00:01:46.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:47.847 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:47.847 + for nvme in "${!nvme_files[@]}" 00:01:47.847 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:47.847 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.847 + for nvme in "${!nvme_files[@]}" 00:01:47.847 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:47.847 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:47.847 + for nvme in "${!nvme_files[@]}" 00:01:47.847 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:48.414 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:48.672 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:48.672 + echo 'End stage prepare_nvme.sh' 00:01:48.672 End stage prepare_nvme.sh 00:01:48.684 [Pipeline] sh 00:01:48.964 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:48.964 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:48.964 00:01:48.964 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:01:48.964 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:01:48.964 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:01:48.964 HELP=0 00:01:48.964 DRY_RUN=0 00:01:48.964 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:48.964 NVME_DISKS_TYPE=nvme,nvme, 00:01:48.964 NVME_AUTO_CREATE=0 00:01:48.964 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:48.964 NVME_CMB=,, 00:01:48.964 NVME_PMR=,, 00:01:48.964 NVME_ZNS=,, 00:01:48.964 NVME_MS=,, 00:01:48.964 NVME_FDP=,, 00:01:48.964 SPDK_VAGRANT_DISTRO=fedora39 00:01:48.964 SPDK_VAGRANT_VMCPU=10 00:01:48.964 SPDK_VAGRANT_VMRAM=12288 00:01:48.964 SPDK_VAGRANT_PROVIDER=libvirt 00:01:48.964 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:48.964 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:48.964 SPDK_OPENSTACK_NETWORK=0 00:01:48.964 VAGRANT_PACKAGE_BOX=0 00:01:48.964 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:48.964 FORCE_DISTRO=true 00:01:48.964 VAGRANT_BOX_VERSION= 00:01:48.964 EXTRA_VAGRANTFILES= 00:01:48.964 NIC_MODEL=e1000 00:01:48.964 00:01:48.964 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:01:48.964 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:01:52.245 Bringing machine 'default' up with 'libvirt' provider... 00:01:53.178 ==> default: Creating image (snapshot of base box volume). 00:01:53.437 ==> default: Creating domain with the following settings... 00:01:53.437 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727475527_565718558f098420b0a4 00:01:53.437 ==> default: -- Domain type: kvm 00:01:53.437 ==> default: -- Cpus: 10 00:01:53.437 ==> default: -- Feature: acpi 00:01:53.437 ==> default: -- Feature: apic 00:01:53.437 ==> default: -- Feature: pae 00:01:53.437 ==> default: -- Memory: 12288M 00:01:53.437 ==> default: -- Memory Backing: hugepages: 00:01:53.437 ==> default: -- Management MAC: 00:01:53.437 ==> default: -- Loader: 00:01:53.437 ==> default: -- Nvram: 00:01:53.437 ==> default: -- Base box: spdk/fedora39 00:01:53.437 ==> default: -- Storage pool: default 00:01:53.438 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727475527_565718558f098420b0a4.img (20G) 00:01:53.438 ==> default: -- Volume Cache: default 00:01:53.438 ==> default: -- Kernel: 00:01:53.438 ==> default: -- Initrd: 00:01:53.438 ==> default: -- Graphics Type: vnc 00:01:53.438 ==> default: -- Graphics Port: -1 00:01:53.438 ==> default: -- Graphics IP: 127.0.0.1 00:01:53.438 ==> default: -- Graphics Password: Not defined 00:01:53.438 ==> default: -- Video Type: cirrus 00:01:53.438 ==> default: -- Video VRAM: 9216 00:01:53.438 ==> default: -- Sound Type: 00:01:53.438 ==> default: -- Keymap: en-us 00:01:53.438 ==> default: -- TPM Path: 00:01:53.438 
==> default: -- INPUT: type=mouse, bus=ps2 00:01:53.438 ==> default: -- Command line args: 00:01:53.438 ==> default: -> value=-device, 00:01:53.438 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:53.438 ==> default: -> value=-drive, 00:01:53.438 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:53.438 ==> default: -> value=-device, 00:01:53.438 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.438 ==> default: -> value=-device, 00:01:53.438 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:53.438 ==> default: -> value=-drive, 00:01:53.438 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:53.438 ==> default: -> value=-device, 00:01:53.438 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.438 ==> default: -> value=-drive, 00:01:53.438 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:53.438 ==> default: -> value=-device, 00:01:53.438 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.438 ==> default: -> value=-drive, 00:01:53.438 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:53.438 ==> default: -> value=-device, 00:01:53.438 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.006 ==> default: Creating shared folders metadata... 00:01:54.006 ==> default: Starting domain. 00:01:55.911 ==> default: Waiting for domain to get an IP address... 00:02:14.015 ==> default: Waiting for SSH to become available... 
00:02:14.015 ==> default: Configuring and enabling network interfaces... 00:02:18.211 default: SSH address: 192.168.121.193:22 00:02:18.211 default: SSH username: vagrant 00:02:18.211 default: SSH auth method: private key 00:02:20.757 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:28.876 ==> default: Mounting SSHFS shared folder... 00:02:31.409 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:31.409 ==> default: Checking Mount.. 00:02:32.790 ==> default: Folder Successfully Mounted! 00:02:32.790 ==> default: Running provisioner: file... 00:02:34.169 default: ~/.gitconfig => .gitconfig 00:02:34.429 00:02:34.429 SUCCESS! 00:02:34.429 00:02:34.429 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:34.429 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:34.429 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:02:34.429 00:02:34.438 [Pipeline] } 00:02:34.454 [Pipeline] // stage 00:02:34.464 [Pipeline] dir 00:02:34.464 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:02:34.466 [Pipeline] { 00:02:34.480 [Pipeline] catchError 00:02:34.482 [Pipeline] { 00:02:34.494 [Pipeline] sh 00:02:34.776 + vagrant ssh-config --host vagrant 00:02:34.776 + sed -ne /^Host/,$p 00:02:34.776 + tee ssh_conf 00:02:38.065 Host vagrant 00:02:38.065 HostName 192.168.121.193 00:02:38.065 User vagrant 00:02:38.065 Port 22 00:02:38.065 UserKnownHostsFile /dev/null 00:02:38.065 StrictHostKeyChecking no 00:02:38.065 PasswordAuthentication no 00:02:38.065 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:38.065 IdentitiesOnly yes 00:02:38.065 LogLevel FATAL 00:02:38.065 ForwardAgent yes 00:02:38.065 ForwardX11 yes 00:02:38.066 00:02:38.080 [Pipeline] withEnv 00:02:38.082 [Pipeline] { 00:02:38.097 [Pipeline] sh 00:02:38.379 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:38.379 source /etc/os-release 00:02:38.379 [[ -e /image.version ]] && img=$(< /image.version) 00:02:38.379 # Minimal, systemd-like check. 00:02:38.379 if [[ -e /.dockerenv ]]; then 00:02:38.379 # Clear garbage from the node's name: 00:02:38.379 # agt-er_autotest_547-896 -> autotest_547-896 00:02:38.379 # $HOSTNAME is the actual container id 00:02:38.379 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:38.379 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:38.379 # We can assume this is a mount from a host where container is running, 00:02:38.379 # so fetch its hostname to easily identify the target swarm worker. 
00:02:38.379 container="$(< /etc/hostname) ($agent)" 00:02:38.379 else 00:02:38.379 # Fallback 00:02:38.379 container=$agent 00:02:38.379 fi 00:02:38.379 fi 00:02:38.379 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:38.379 00:02:38.652 [Pipeline] } 00:02:38.669 [Pipeline] // withEnv 00:02:38.677 [Pipeline] setCustomBuildProperty 00:02:38.693 [Pipeline] stage 00:02:38.696 [Pipeline] { (Tests) 00:02:38.714 [Pipeline] sh 00:02:39.016 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:39.290 [Pipeline] sh 00:02:39.596 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:39.873 [Pipeline] timeout 00:02:39.873 Timeout set to expire in 1 hr 30 min 00:02:39.875 [Pipeline] { 00:02:39.891 [Pipeline] sh 00:02:40.174 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:40.742 HEAD is now at a2e043c42 bdev/passthru: add bdev_io_stack support 00:02:40.755 [Pipeline] sh 00:02:41.036 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:41.311 [Pipeline] sh 00:02:41.596 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:41.872 [Pipeline] sh 00:02:42.155 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:42.441 ++ readlink -f spdk_repo 00:02:42.441 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:42.441 + [[ -n /home/vagrant/spdk_repo ]] 00:02:42.441 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:42.441 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:42.441 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:42.441 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:42.441 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:42.441 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:42.441 + cd /home/vagrant/spdk_repo 00:02:42.441 + source /etc/os-release 00:02:42.441 ++ NAME='Fedora Linux' 00:02:42.441 ++ VERSION='39 (Cloud Edition)' 00:02:42.441 ++ ID=fedora 00:02:42.441 ++ VERSION_ID=39 00:02:42.441 ++ VERSION_CODENAME= 00:02:42.441 ++ PLATFORM_ID=platform:f39 00:02:42.441 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:42.441 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:42.441 ++ LOGO=fedora-logo-icon 00:02:42.441 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:42.441 ++ HOME_URL=https://fedoraproject.org/ 00:02:42.441 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:42.441 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:42.441 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:42.441 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:42.441 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:42.441 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:42.441 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:42.441 ++ SUPPORT_END=2024-11-12 00:02:42.441 ++ VARIANT='Cloud Edition' 00:02:42.441 ++ VARIANT_ID=cloud 00:02:42.441 + uname -a 00:02:42.441 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:42.441 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:43.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:43.009 Hugepages 00:02:43.009 node hugesize free / total 00:02:43.009 node0 1048576kB 0 / 0 00:02:43.009 node0 2048kB 0 / 0 00:02:43.009 00:02:43.009 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:43.009 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:43.009 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:43.009 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:43.009 + rm -f /tmp/spdk-ld-path 00:02:43.009 + source autorun-spdk.conf 00:02:43.009 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.009 ++ SPDK_RUN_ASAN=1 00:02:43.009 ++ SPDK_RUN_UBSAN=1 00:02:43.009 ++ SPDK_TEST_RAID=1 00:02:43.009 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.009 ++ RUN_NIGHTLY=0 00:02:43.009 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:43.009 + [[ -n '' ]] 00:02:43.009 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:43.009 + for M in /var/spdk/build-*-manifest.txt 00:02:43.009 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:43.009 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:43.009 + for M in /var/spdk/build-*-manifest.txt 00:02:43.009 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:43.009 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:43.269 + for M in /var/spdk/build-*-manifest.txt 00:02:43.269 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:43.269 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:43.269 ++ uname 00:02:43.269 + [[ Linux == \L\i\n\u\x ]] 00:02:43.269 + sudo dmesg -T 00:02:43.269 + sudo dmesg --clear 00:02:43.269 + dmesg_pid=5210 00:02:43.269 + sudo dmesg -Tw 00:02:43.269 + [[ Fedora Linux == FreeBSD ]] 00:02:43.269 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:43.269 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:43.269 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:43.269 + [[ -x /usr/src/fio-static/fio ]] 00:02:43.269 + export FIO_BIN=/usr/src/fio-static/fio 00:02:43.269 + FIO_BIN=/usr/src/fio-static/fio 00:02:43.269 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:43.269 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:43.269 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:43.269 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:43.269 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:43.269 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:43.269 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:43.269 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:43.269 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:43.269 Test configuration: 00:02:43.269 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.269 SPDK_RUN_ASAN=1 00:02:43.269 SPDK_RUN_UBSAN=1 00:02:43.269 SPDK_TEST_RAID=1 00:02:43.269 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.269 RUN_NIGHTLY=0 22:19:39 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:43.269 22:19:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:43.269 22:19:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:43.269 22:19:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:43.269 22:19:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.269 22:19:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.269 22:19:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.269 22:19:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.269 22:19:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.269 22:19:39 -- paths/export.sh@5 -- $ export PATH 00:02:43.269 22:19:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.269 22:19:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:43.269 22:19:39 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:43.269 22:19:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727475579.XXXXXX 00:02:43.269 22:19:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727475579.Tv6FHn 00:02:43.269 22:19:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:43.269 22:19:39 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:02:43.269 22:19:39 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:43.269 22:19:39 -- common/autobuild_common.sh@492 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:43.269 22:19:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:43.269 22:19:39 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:43.269 22:19:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:43.269 22:19:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.529 22:19:39 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:43.529 22:19:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:43.529 22:19:39 -- pm/common@17 -- $ local monitor 00:02:43.529 22:19:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.529 22:19:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.529 22:19:39 -- pm/common@25 -- $ sleep 1 00:02:43.529 22:19:39 -- pm/common@21 -- $ date +%s 00:02:43.529 22:19:39 -- pm/common@21 -- $ date +%s 00:02:43.529 22:19:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727475579 00:02:43.529 22:19:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727475579 00:02:43.529 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727475579_collect-vmstat.pm.log 00:02:43.529 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727475579_collect-cpu-load.pm.log 00:02:44.469 22:19:40 -- common/autobuild_common.sh@498 -- 
$ trap stop_monitor_resources EXIT 00:02:44.469 22:19:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:44.469 22:19:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:44.469 22:19:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:44.469 22:19:40 -- spdk/autobuild.sh@16 -- $ date -u 00:02:44.469 Fri Sep 27 10:19:40 PM UTC 2024 00:02:44.469 22:19:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:44.469 v25.01-pre-19-ga2e043c42 00:02:44.469 22:19:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:44.469 22:19:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:44.469 22:19:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:44.469 22:19:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:44.469 22:19:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.469 ************************************ 00:02:44.469 START TEST asan 00:02:44.469 ************************************ 00:02:44.469 using asan 00:02:44.469 22:19:40 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:44.469 00:02:44.469 real 0m0.001s 00:02:44.469 user 0m0.000s 00:02:44.469 sys 0m0.000s 00:02:44.469 22:19:40 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:44.469 22:19:40 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:44.469 ************************************ 00:02:44.469 END TEST asan 00:02:44.469 ************************************ 00:02:44.469 22:19:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:44.469 22:19:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:44.469 22:19:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:44.469 22:19:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:44.469 22:19:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.469 ************************************ 00:02:44.469 START TEST ubsan 00:02:44.469 ************************************ 00:02:44.469 using ubsan 00:02:44.469 22:19:40 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:44.469 00:02:44.469 real 0m0.000s 00:02:44.469 user 0m0.000s 00:02:44.469 sys 0m0.000s 00:02:44.469 22:19:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:44.469 22:19:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:44.469 ************************************ 00:02:44.469 END TEST ubsan 00:02:44.469 ************************************ 00:02:44.469 22:19:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:44.469 22:19:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:44.469 22:19:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:44.469 22:19:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:44.469 22:19:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:44.469 22:19:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:44.469 22:19:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:44.469 22:19:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:44.469 22:19:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:44.729 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:44.729 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.297 Using 'verbs' RDMA provider 00:03:01.146 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:16.084 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:16.084 Creating mk/config.mk...done. 00:03:16.084 Creating mk/cc.flags.mk...done. 00:03:16.084 Type 'make' to build. 
00:03:16.084 22:20:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:16.084 22:20:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:16.084 22:20:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:16.084 22:20:10 -- common/autotest_common.sh@10 -- $ set +x
00:03:16.084 ************************************
00:03:16.084 START TEST make
00:03:16.084 ************************************
00:03:16.084 22:20:10 make -- common/autotest_common.sh@1125 -- $ make -j10
00:03:16.084 make[1]: Nothing to be done for 'all'.
00:03:28.292 The Meson build system
00:03:28.292 Version: 1.5.0
00:03:28.292 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:28.292 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:28.292 Build type: native build
00:03:28.292 Program cat found: YES (/usr/bin/cat)
00:03:28.292 Project name: DPDK
00:03:28.292 Project version: 24.03.0
00:03:28.292 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:28.292 C linker for the host machine: cc ld.bfd 2.40-14
00:03:28.292 Host machine cpu family: x86_64
00:03:28.292 Host machine cpu: x86_64
00:03:28.292 Message: ## Building in Developer Mode ##
00:03:28.292 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:28.292 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:28.292 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:28.292 Program python3 found: YES (/usr/bin/python3)
00:03:28.292 Program cat found: YES (/usr/bin/cat)
00:03:28.292 Compiler for C supports arguments -march=native: YES
00:03:28.292 Checking for size of "void *" : 8
00:03:28.292 Checking for size of "void *" : 8 (cached)
00:03:28.292 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:28.292 Library m found: YES
00:03:28.292 Library numa found: YES
00:03:28.292 Has header "numaif.h" : YES
00:03:28.292 Library fdt found: NO
00:03:28.292 Library execinfo found: NO
00:03:28.292 Has header "execinfo.h" : YES
00:03:28.292 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:28.292 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:28.292 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:28.292 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:28.292 Run-time dependency openssl found: YES 3.1.1
00:03:28.292 Run-time dependency libpcap found: YES 1.10.4
00:03:28.292 Has header "pcap.h" with dependency libpcap: YES
00:03:28.292 Compiler for C supports arguments -Wcast-qual: YES
00:03:28.292 Compiler for C supports arguments -Wdeprecated: YES
00:03:28.292 Compiler for C supports arguments -Wformat: YES
00:03:28.292 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:28.292 Compiler for C supports arguments -Wformat-security: NO
00:03:28.292 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:28.292 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:28.292 Compiler for C supports arguments -Wnested-externs: YES
00:03:28.292 Compiler for C supports arguments -Wold-style-definition: YES
00:03:28.292 Compiler for C supports arguments -Wpointer-arith: YES
00:03:28.292 Compiler for C supports arguments -Wsign-compare: YES
00:03:28.292 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:28.292 Compiler for C supports arguments -Wundef: YES
00:03:28.292 Compiler for C supports arguments -Wwrite-strings: YES
00:03:28.292 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:28.292 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:28.292 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:28.292 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:28.292 Program objdump found: YES (/usr/bin/objdump)
00:03:28.292 Compiler for C supports arguments -mavx512f: YES
00:03:28.292 Checking if "AVX512 checking" compiles: YES
00:03:28.292 Fetching value of define "__SSE4_2__" : 1
00:03:28.292 Fetching value of define "__AES__" : 1
00:03:28.292 Fetching value of define "__AVX__" : 1
00:03:28.292 Fetching value of define "__AVX2__" : 1
00:03:28.292 Fetching value of define "__AVX512BW__" : 1
00:03:28.292 Fetching value of define "__AVX512CD__" : 1
00:03:28.292 Fetching value of define "__AVX512DQ__" : 1
00:03:28.292 Fetching value of define "__AVX512F__" : 1
00:03:28.292 Fetching value of define "__AVX512VL__" : 1
00:03:28.292 Fetching value of define "__PCLMUL__" : 1
00:03:28.292 Fetching value of define "__RDRND__" : 1
00:03:28.292 Fetching value of define "__RDSEED__" : 1
00:03:28.292 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:28.292 Fetching value of define "__znver1__" : (undefined)
00:03:28.292 Fetching value of define "__znver2__" : (undefined)
00:03:28.292 Fetching value of define "__znver3__" : (undefined)
00:03:28.292 Fetching value of define "__znver4__" : (undefined)
00:03:28.292 Library asan found: YES
00:03:28.292 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:28.292 Message: lib/log: Defining dependency "log"
00:03:28.292 Message: lib/kvargs: Defining dependency "kvargs"
00:03:28.292 Message: lib/telemetry: Defining dependency "telemetry"
00:03:28.292 Library rt found: YES
00:03:28.292 Checking for function "getentropy" : NO
00:03:28.292 Message: lib/eal: Defining dependency "eal"
00:03:28.292 Message: lib/ring: Defining dependency "ring"
00:03:28.292 Message: lib/rcu: Defining dependency "rcu"
00:03:28.292 Message: lib/mempool: Defining dependency "mempool"
00:03:28.292 Message: lib/mbuf: Defining dependency "mbuf"
00:03:28.292 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:28.292 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:28.292 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:28.292 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:28.292 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:28.292 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:28.292 Compiler for C supports arguments -mpclmul: YES
00:03:28.292 Compiler for C supports arguments -maes: YES
00:03:28.292 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:28.292 Compiler for C supports arguments -mavx512bw: YES
00:03:28.292 Compiler for C supports arguments -mavx512dq: YES
00:03:28.292 Compiler for C supports arguments -mavx512vl: YES
00:03:28.292 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:28.292 Compiler for C supports arguments -mavx2: YES
00:03:28.292 Compiler for C supports arguments -mavx: YES
00:03:28.292 Message: lib/net: Defining dependency "net"
00:03:28.292 Message: lib/meter: Defining dependency "meter"
00:03:28.292 Message: lib/ethdev: Defining dependency "ethdev"
00:03:28.292 Message: lib/pci: Defining dependency "pci"
00:03:28.292 Message: lib/cmdline: Defining dependency "cmdline"
00:03:28.292 Message: lib/hash: Defining dependency "hash"
00:03:28.292 Message: lib/timer: Defining dependency "timer"
00:03:28.292 Message: lib/compressdev: Defining dependency "compressdev"
00:03:28.292 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:28.292 Message: lib/dmadev: Defining dependency "dmadev"
00:03:28.292 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:28.292 Message: lib/power: Defining dependency "power"
00:03:28.292 Message: lib/reorder: Defining dependency "reorder"
00:03:28.292 Message: lib/security: Defining dependency "security"
00:03:28.292 Has header "linux/userfaultfd.h" : YES
00:03:28.292 Has header "linux/vduse.h" : YES
00:03:28.292 Message: lib/vhost: Defining dependency "vhost"
00:03:28.292 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:28.292 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:28.292 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:28.292 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:28.292 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:28.292 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:28.292 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:28.292 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:28.292 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:28.292 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:28.292 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:28.292 Configuring doxy-api-html.conf using configuration
00:03:28.292 Configuring doxy-api-man.conf using configuration
00:03:28.292 Program mandb found: YES (/usr/bin/mandb)
00:03:28.292 Program sphinx-build found: NO
00:03:28.292 Configuring rte_build_config.h using configuration
00:03:28.292 Message:
00:03:28.292 =================
00:03:28.292 Applications Enabled
00:03:28.292 =================
00:03:28.292
00:03:28.292 apps:
00:03:28.292
00:03:28.292
00:03:28.292 Message:
00:03:28.292 =================
00:03:28.292 Libraries Enabled
00:03:28.292 =================
00:03:28.292
00:03:28.292 libs:
00:03:28.292 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:28.292 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:28.292 cryptodev, dmadev, power, reorder, security, vhost,
00:03:28.292
00:03:28.292 Message:
00:03:28.292 ===============
00:03:28.292 Drivers Enabled
00:03:28.292 ===============
00:03:28.292
00:03:28.292 common:
00:03:28.292
00:03:28.292 bus:
00:03:28.292 pci, vdev,
00:03:28.292 mempool:
00:03:28.292 ring,
00:03:28.292 dma:
00:03:28.292
00:03:28.292 net:
00:03:28.292
00:03:28.292 crypto:
00:03:28.292
00:03:28.292 compress:
00:03:28.292
00:03:28.292 vdpa:
00:03:28.292
00:03:28.292
00:03:28.292 Message:
00:03:28.292 =================
00:03:28.292 Content Skipped
00:03:28.292 =================
00:03:28.292
00:03:28.293 apps:
00:03:28.293 dumpcap: explicitly disabled via build config
00:03:28.293 graph: explicitly disabled via build config
00:03:28.293 pdump: explicitly disabled via build config
00:03:28.293 proc-info: explicitly disabled via build config
00:03:28.293 test-acl: explicitly disabled via build config
00:03:28.293 test-bbdev: explicitly disabled via build config
00:03:28.293 test-cmdline: explicitly disabled via build config
00:03:28.293 test-compress-perf: explicitly disabled via build config
00:03:28.293 test-crypto-perf: explicitly disabled via build config
00:03:28.293 test-dma-perf: explicitly disabled via build config
00:03:28.293 test-eventdev: explicitly disabled via build config
00:03:28.293 test-fib: explicitly disabled via build config
00:03:28.293 test-flow-perf: explicitly disabled via build config
00:03:28.293 test-gpudev: explicitly disabled via build config
00:03:28.293 test-mldev: explicitly disabled via build config
00:03:28.293 test-pipeline: explicitly disabled via build config
00:03:28.293 test-pmd: explicitly disabled via build config
00:03:28.293 test-regex: explicitly disabled via build config
00:03:28.293 test-sad: explicitly disabled via build config
00:03:28.293 test-security-perf: explicitly disabled via build config
00:03:28.293
00:03:28.293 libs:
00:03:28.293 argparse: explicitly disabled via build config
00:03:28.293 metrics: explicitly disabled via build config
00:03:28.293 acl: explicitly disabled via build config
00:03:28.293 bbdev: explicitly disabled via build config
00:03:28.293 bitratestats: explicitly disabled via build config
00:03:28.293 bpf: explicitly disabled via build config
00:03:28.293 cfgfile: explicitly disabled via build config
00:03:28.293 distributor: explicitly disabled via build config
00:03:28.293 efd: explicitly disabled via build config
00:03:28.293 eventdev: explicitly disabled via build config
00:03:28.293 dispatcher: explicitly disabled via build config
00:03:28.293 gpudev: explicitly disabled via build config
00:03:28.293 gro: explicitly disabled via build config
00:03:28.293 gso: explicitly disabled via build config
00:03:28.293 ip_frag: explicitly disabled via build config
00:03:28.293 jobstats: explicitly disabled via build config
00:03:28.293 latencystats: explicitly disabled via build config
00:03:28.293 lpm: explicitly disabled via build config
00:03:28.293 member: explicitly disabled via build config
00:03:28.293 pcapng: explicitly disabled via build config
00:03:28.293 rawdev: explicitly disabled via build config
00:03:28.293 regexdev: explicitly disabled via build config
00:03:28.293 mldev: explicitly disabled via build config
00:03:28.293 rib: explicitly disabled via build config
00:03:28.293 sched: explicitly disabled via build config
00:03:28.293 stack: explicitly disabled via build config
00:03:28.293 ipsec: explicitly disabled via build config
00:03:28.293 pdcp: explicitly disabled via build config
00:03:28.293 fib: explicitly disabled via build config
00:03:28.293 port: explicitly disabled via build config
00:03:28.293 pdump: explicitly disabled via build config
00:03:28.293 table: explicitly disabled via build config
00:03:28.293 pipeline: explicitly disabled via build config
00:03:28.293 graph: explicitly disabled via build config
00:03:28.293 node: explicitly disabled via build config
00:03:28.293
00:03:28.293 drivers:
00:03:28.293 common/cpt: not in enabled drivers build config
00:03:28.293 common/dpaax: not in enabled drivers build config
00:03:28.293 common/iavf: not in enabled drivers build config
00:03:28.293 common/idpf: not in enabled drivers build config
00:03:28.293 common/ionic: not in enabled drivers build config
00:03:28.293 common/mvep: not in enabled drivers build config
00:03:28.293 common/octeontx: not in enabled drivers build config
00:03:28.293 bus/auxiliary: not in enabled drivers build config
00:03:28.293 bus/cdx: not in enabled drivers build config
00:03:28.293 bus/dpaa: not in enabled drivers build config
00:03:28.293 bus/fslmc: not in enabled drivers build config
00:03:28.293 bus/ifpga: not in enabled drivers build config
00:03:28.293 bus/platform: not in enabled drivers build config
00:03:28.293 bus/uacce: not in enabled drivers build config
00:03:28.293 bus/vmbus: not in enabled drivers build config
00:03:28.293 common/cnxk: not in enabled drivers build config
00:03:28.293 common/mlx5: not in enabled drivers build config
00:03:28.293 common/nfp: not in enabled drivers build config
00:03:28.293 common/nitrox: not in enabled drivers build config
00:03:28.293 common/qat: not in enabled drivers build config
00:03:28.293 common/sfc_efx: not in enabled drivers build config
00:03:28.293 mempool/bucket: not in enabled drivers build config
00:03:28.293 mempool/cnxk: not in enabled drivers build config
00:03:28.293 mempool/dpaa: not in enabled drivers build config
00:03:28.293 mempool/dpaa2: not in enabled drivers build config
00:03:28.293 mempool/octeontx: not in enabled drivers build config
00:03:28.293 mempool/stack: not in enabled drivers build config
00:03:28.293 dma/cnxk: not in enabled drivers build config
00:03:28.293 dma/dpaa: not in enabled drivers build config
00:03:28.293 dma/dpaa2: not in enabled drivers build config
00:03:28.293 dma/hisilicon: not in enabled drivers build config
00:03:28.293 dma/idxd: not in enabled drivers build config
00:03:28.293 dma/ioat: not in enabled drivers build config
00:03:28.293 dma/skeleton: not in enabled drivers build config
00:03:28.293 net/af_packet: not in enabled drivers build config
00:03:28.293 net/af_xdp: not in enabled drivers build config
00:03:28.293 net/ark: not in enabled drivers build config
00:03:28.293 net/atlantic: not in enabled drivers build config
00:03:28.293 net/avp: not in enabled drivers build config
00:03:28.293 net/axgbe: not in enabled drivers build config
00:03:28.293 net/bnx2x: not in enabled drivers build config
00:03:28.293 net/bnxt: not in enabled drivers build config
00:03:28.293 net/bonding: not in enabled drivers build config
00:03:28.293 net/cnxk: not in enabled drivers build config
00:03:28.293 net/cpfl: not in enabled drivers build config
00:03:28.293 net/cxgbe: not in enabled drivers build config
00:03:28.293 net/dpaa: not in enabled drivers build config
00:03:28.293 net/dpaa2: not in enabled drivers build config
00:03:28.293 net/e1000: not in enabled drivers build config
00:03:28.293 net/ena: not in enabled drivers build config
00:03:28.293 net/enetc: not in enabled drivers build config
00:03:28.293 net/enetfec: not in enabled drivers build config
00:03:28.293 net/enic: not in enabled drivers build config
00:03:28.293 net/failsafe: not in enabled drivers build config
00:03:28.293 net/fm10k: not in enabled drivers build config
00:03:28.293 net/gve: not in enabled drivers build config
00:03:28.293 net/hinic: not in enabled drivers build config
00:03:28.293 net/hns3: not in enabled drivers build config
00:03:28.293 net/i40e: not in enabled drivers build config
00:03:28.293 net/iavf: not in enabled drivers build config
00:03:28.293 net/ice: not in enabled drivers build config
00:03:28.293 net/idpf: not in enabled drivers build config
00:03:28.293 net/igc: not in enabled drivers build config
00:03:28.293 net/ionic: not in enabled drivers build config
00:03:28.293 net/ipn3ke: not in enabled drivers build config
00:03:28.293 net/ixgbe: not in enabled drivers build config
00:03:28.293 net/mana: not in enabled drivers build config
00:03:28.293 net/memif: not in enabled drivers build config
00:03:28.293 net/mlx4: not in enabled drivers build config
00:03:28.293 net/mlx5: not in enabled drivers build config
00:03:28.293 net/mvneta: not in enabled drivers build config
00:03:28.293 net/mvpp2: not in enabled drivers build config
00:03:28.293 net/netvsc: not in enabled drivers build config
00:03:28.293 net/nfb: not in enabled drivers build config
00:03:28.293 net/nfp: not in enabled drivers build config
00:03:28.293 net/ngbe: not in enabled drivers build config
00:03:28.293 net/null: not in enabled drivers build config
00:03:28.293 net/octeontx: not in enabled drivers build config
00:03:28.293 net/octeon_ep: not in enabled drivers build config
00:03:28.293 net/pcap: not in enabled drivers build config
00:03:28.293 net/pfe: not in enabled drivers build config
00:03:28.293 net/qede: not in enabled drivers build config
00:03:28.293 net/ring: not in enabled drivers build config
00:03:28.293 net/sfc: not in enabled drivers build config
00:03:28.293 net/softnic: not in enabled drivers build config
00:03:28.293 net/tap: not in enabled drivers build config
00:03:28.293 net/thunderx: not in enabled drivers build config
00:03:28.293 net/txgbe: not in enabled drivers build config
00:03:28.293 net/vdev_netvsc: not in enabled drivers build config
00:03:28.293 net/vhost: not in enabled drivers build config
00:03:28.293 net/virtio: not in enabled drivers build config
00:03:28.293 net/vmxnet3: not in enabled drivers build config
00:03:28.293 raw/*: missing internal dependency, "rawdev"
00:03:28.293 crypto/armv8: not in enabled drivers build config
00:03:28.293 crypto/bcmfs: not in enabled drivers build config
00:03:28.293 crypto/caam_jr: not in enabled drivers build config
00:03:28.293 crypto/ccp: not in enabled drivers build config
00:03:28.293 crypto/cnxk: not in enabled drivers build config
00:03:28.293 crypto/dpaa_sec: not in enabled drivers build config
00:03:28.293 crypto/dpaa2_sec: not in enabled drivers build config
00:03:28.293 crypto/ipsec_mb: not in enabled drivers build config
00:03:28.293 crypto/mlx5: not in enabled drivers build config
00:03:28.293 crypto/mvsam: not in enabled drivers build config
00:03:28.293 crypto/nitrox: not in enabled drivers build config
00:03:28.293 crypto/null: not in enabled drivers build config
00:03:28.293 crypto/octeontx: not in enabled drivers build config
00:03:28.293 crypto/openssl: not in enabled drivers build config
00:03:28.293 crypto/scheduler: not in enabled drivers build config
00:03:28.293 crypto/uadk: not in enabled drivers build config
00:03:28.293 crypto/virtio: not in enabled drivers build config
00:03:28.293 compress/isal: not in enabled drivers build config
00:03:28.293 compress/mlx5: not in enabled drivers build config
00:03:28.293 compress/nitrox: not in enabled drivers build config
00:03:28.293 compress/octeontx: not in enabled drivers build config
00:03:28.293 compress/zlib: not in enabled drivers build config
00:03:28.293 regex/*: missing internal dependency, "regexdev"
00:03:28.293 ml/*: missing internal dependency, "mldev"
00:03:28.293 vdpa/ifc: not in enabled drivers build config
00:03:28.293 vdpa/mlx5: not in enabled drivers build config
00:03:28.293 vdpa/nfp: not in enabled drivers build config
00:03:28.293 vdpa/sfc: not in enabled drivers build config
00:03:28.293 event/*: missing internal dependency, "eventdev"
00:03:28.293 baseband/*: missing internal dependency, "bbdev"
00:03:28.293 gpu/*: missing internal dependency, "gpudev"
00:03:28.293
00:03:28.293
00:03:28.293 Build targets in project: 85
00:03:28.293
00:03:28.293 DPDK 24.03.0
00:03:28.293
00:03:28.293 User defined options
00:03:28.293 buildtype : debug
00:03:28.293 default_library : shared
00:03:28.294 libdir : lib
00:03:28.294 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:28.294 b_sanitize : address
00:03:28.294 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:28.294 c_link_args :
00:03:28.294 cpu_instruction_set: native
00:03:28.294 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:28.294 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:28.294 enable_docs : false
00:03:28.294 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:28.294 enable_kmods : false
00:03:28.294 max_lcores : 128
00:03:28.294 tests : false
00:03:28.294
00:03:28.294 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:28.294 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:28.294 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:28.294 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:28.294 [3/268] Linking static target lib/librte_kvargs.a
00:03:28.294 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:28.294 [5/268] Linking static target lib/librte_log.a
00:03:28.294 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:28.294 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:28.294 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:28.294 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:28.294 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:28.294 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:28.294 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:28.294 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.294 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:28.294 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:28.294 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
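The "User defined options" summary above maps onto DPDK meson options. As a sketch only (SPDK's configure script normally generates this invocation, and the prefix path is the CI machine's), a roughly equivalent standalone setup call would look like:

```shell
# Sketch: approximate meson setup matching the options summary above.
# Assumes a DPDK source tree as the working directory; disable_apps/disable_libs
# lists are elided here for brevity but would be passed the same way.
meson setup build-tmp \
  --buildtype=debug --default-library=shared --libdir=lib \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Db_sanitize=address \
  -Dtests=false -Denable_kmods=false -Dmax_lcores=128 \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
```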
00:03:28.294 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:28.294 [18/268] Linking static target lib/librte_telemetry.a
00:03:28.294 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.294 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:28.294 [21/268] Linking target lib/librte_log.so.24.1
00:03:28.294 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:28.553 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:28.553 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:28.553 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:28.553 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:28.553 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:28.553 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:28.812 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:28.812 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:28.812 [31/268] Linking target lib/librte_kvargs.so.24.1
00:03:28.812 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:29.071 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.071 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:29.071 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:29.071 [36/268] Linking target lib/librte_telemetry.so.24.1
00:03:29.071 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:29.071 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:29.071 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:29.331 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:29.331 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:29.331 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:29.331 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:29.331 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:29.331 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:29.331 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:29.331 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:29.590 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:29.590 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:29.590 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:29.590 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:29.849 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:29.849 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:29.849 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:29.849 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:30.108 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:30.108 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:30.108 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:30.108 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:30.108 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:30.367 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:30.367 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:30.367 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:30.367 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:30.367 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:30.367 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:30.625 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:30.625 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:30.884 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:30.884 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:30.884 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:30.884 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:30.884 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:30.884 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:30.884 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:30.884 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:31.143 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:31.143 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:31.143 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:31.143 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:31.402 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:31.402 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:31.402 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:31.402 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:31.402 [85/268] Linking static target lib/librte_eal.a
00:03:31.682 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:31.682 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:31.682 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:31.682 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:31.682 [90/268] Linking static target lib/librte_mempool.a
00:03:31.682 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:31.682 [92/268] Linking static target lib/librte_ring.a
00:03:31.682 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:31.939 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:31.939 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:31.939 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:32.198 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:32.198 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:32.198 [99/268] Linking static target lib/librte_rcu.a
00:03:32.198 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.456 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:32.456 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:32.456 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:32.456 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:32.456 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:32.456 [106/268] Linking static target lib/librte_net.a
00:03:32.714 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.714 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:32.714 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:32.714 [110/268] Linking static target lib/librte_mbuf.a
00:03:32.714 [111/268] Linking static target lib/librte_meter.a
00:03:32.973 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:32.973 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:32.973 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.973 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.973 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:32.973 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.231 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:33.797 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:33.797 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:33.797 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:33.797 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:33.797 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.797 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:33.797 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:33.797 [126/268] Linking static target lib/librte_pci.a
00:03:34.054 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:34.054 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:34.312 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:34.312 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:34.312 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.312 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:34.313 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:34.571 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:34.571 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:34.571 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:34.571 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:34.571 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:34.571 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:34.571 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:34.571 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:34.571 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:34.571 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:34.571 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:34.571 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:34.571 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:34.828 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:34.828 [148/268] Linking static target lib/librte_cmdline.a 00:03:35.085 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:35.086 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:35.086 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:35.086 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:35.344 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:35.344 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:35.344 [155/268] Linking static target lib/librte_timer.a 00:03:35.602 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:35.602 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:35.602 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:35.863 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:35.863 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:35.863 [161/268] Linking static target lib/librte_ethdev.a 00:03:35.863 [162/268] Linking static target lib/librte_compressdev.a 00:03:35.863 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:35.863 [164/268] Linking static target lib/librte_hash.a 00:03:36.137 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:36.137 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:36.137 [167/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.137 [168/268] Linking static target lib/librte_dmadev.a 00:03:36.137 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:36.394 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:36.394 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:36.394 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.394 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:36.651 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:36.651 [175/268] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.908 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:36.908 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:36.908 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:36.908 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:36.908 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:36.908 [181/268] Linking static target lib/librte_cryptodev.a 00:03:36.908 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.166 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.166 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:37.166 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:37.166 [186/268] Linking static target lib/librte_power.a 00:03:37.424 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:37.424 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:37.424 [189/268] Linking static target lib/librte_reorder.a 00:03:37.682 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:37.939 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:37.939 [192/268] Linking static target lib/librte_security.a 00:03:37.939 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:38.196 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.196 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:38.454 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.454 [197/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:38.712 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.712 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:38.712 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:38.969 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:38.969 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:39.227 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:39.227 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:39.227 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:39.227 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:39.485 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:39.485 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:39.485 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.485 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:39.485 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:39.742 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:39.742 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:39.742 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.742 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:39.742 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:39.742 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.742 [218/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:39.742 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:39.999 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:40.000 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:40.000 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:40.000 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.301 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.301 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.301 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:40.301 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.263 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:43.819 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.819 [230/268] Linking target lib/librte_eal.so.24.1 00:03:44.076 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.076 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.076 [233/268] Linking target lib/librte_ring.so.24.1 00:03:44.076 [234/268] Linking target lib/librte_pci.so.24.1 00:03:44.076 [235/268] Linking target lib/librte_meter.so.24.1 00:03:44.076 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:44.076 [237/268] Linking target lib/librte_timer.so.24.1 00:03:44.076 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.076 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.076 [240/268] Generating symbol file 
lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.077 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:44.334 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.334 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:44.334 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.334 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:44.334 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.334 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:44.334 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:44.334 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:44.590 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:44.590 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:44.590 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:03:44.590 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:44.590 [254/268] Linking target lib/librte_net.so.24.1 00:03:44.847 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:44.847 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:44.847 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:44.847 [258/268] Linking target lib/librte_hash.so.24.1 00:03:44.847 [259/268] Linking target lib/librte_security.so.24.1 00:03:45.105 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.105 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:45.105 [262/268] Linking static target lib/librte_vhost.a 00:03:45.105 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.105 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:45.363 
[265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.363 [266/268] Linking target lib/librte_power.so.24.1 00:03:47.901 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.901 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:47.901 INFO: autodetecting backend as ninja 00:03:47.901 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:02.782 CC lib/log/log.o 00:04:02.782 CC lib/log/log_deprecated.o 00:04:02.782 CC lib/log/log_flags.o 00:04:02.782 CC lib/ut/ut.o 00:04:02.782 CC lib/ut_mock/mock.o 00:04:02.782 LIB libspdk_ut_mock.a 00:04:02.782 LIB libspdk_ut.a 00:04:02.782 LIB libspdk_log.a 00:04:02.782 SO libspdk_ut_mock.so.6.0 00:04:02.782 SO libspdk_ut.so.2.0 00:04:02.782 SO libspdk_log.so.7.0 00:04:03.042 SYMLINK libspdk_ut_mock.so 00:04:03.042 SYMLINK libspdk_ut.so 00:04:03.042 SYMLINK libspdk_log.so 00:04:03.301 CXX lib/trace_parser/trace.o 00:04:03.301 CC lib/dma/dma.o 00:04:03.301 CC lib/util/base64.o 00:04:03.301 CC lib/util/bit_array.o 00:04:03.301 CC lib/util/crc16.o 00:04:03.301 CC lib/util/cpuset.o 00:04:03.301 CC lib/util/crc32.o 00:04:03.301 CC lib/util/crc32c.o 00:04:03.301 CC lib/ioat/ioat.o 00:04:03.559 CC lib/vfio_user/host/vfio_user_pci.o 00:04:03.559 CC lib/util/crc32_ieee.o 00:04:03.559 CC lib/util/crc64.o 00:04:03.559 CC lib/vfio_user/host/vfio_user.o 00:04:03.559 CC lib/util/dif.o 00:04:03.559 LIB libspdk_dma.a 00:04:03.559 CC lib/util/fd.o 00:04:03.559 CC lib/util/fd_group.o 00:04:03.559 SO libspdk_dma.so.5.0 00:04:03.559 CC lib/util/file.o 00:04:03.559 CC lib/util/hexlify.o 00:04:03.819 LIB libspdk_ioat.a 00:04:03.819 SYMLINK libspdk_dma.so 00:04:03.819 CC lib/util/iov.o 00:04:03.819 SO libspdk_ioat.so.7.0 00:04:03.819 CC lib/util/math.o 00:04:03.819 CC lib/util/net.o 00:04:03.819 LIB libspdk_vfio_user.a 00:04:03.819 CC lib/util/pipe.o 00:04:03.819 SYMLINK 
libspdk_ioat.so 00:04:03.819 CC lib/util/strerror_tls.o 00:04:03.819 CC lib/util/string.o 00:04:03.819 SO libspdk_vfio_user.so.5.0 00:04:03.819 CC lib/util/uuid.o 00:04:03.819 CC lib/util/xor.o 00:04:03.819 CC lib/util/zipf.o 00:04:03.819 CC lib/util/md5.o 00:04:04.079 SYMLINK libspdk_vfio_user.so 00:04:04.338 LIB libspdk_util.a 00:04:04.338 SO libspdk_util.so.10.0 00:04:04.595 LIB libspdk_trace_parser.a 00:04:04.595 SO libspdk_trace_parser.so.6.0 00:04:04.595 SYMLINK libspdk_util.so 00:04:04.595 SYMLINK libspdk_trace_parser.so 00:04:04.853 CC lib/rdma_provider/common.o 00:04:04.853 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:04.853 CC lib/vmd/vmd.o 00:04:04.853 CC lib/vmd/led.o 00:04:04.853 CC lib/json/json_parse.o 00:04:04.853 CC lib/json/json_util.o 00:04:04.853 CC lib/env_dpdk/env.o 00:04:04.853 CC lib/idxd/idxd.o 00:04:04.853 CC lib/rdma_utils/rdma_utils.o 00:04:04.853 CC lib/conf/conf.o 00:04:05.111 CC lib/env_dpdk/memory.o 00:04:05.111 LIB libspdk_rdma_provider.a 00:04:05.111 CC lib/env_dpdk/pci.o 00:04:05.111 SO libspdk_rdma_provider.so.6.0 00:04:05.111 LIB libspdk_conf.a 00:04:05.111 CC lib/json/json_write.o 00:04:05.111 CC lib/idxd/idxd_user.o 00:04:05.111 SO libspdk_conf.so.6.0 00:04:05.111 LIB libspdk_rdma_utils.a 00:04:05.111 SYMLINK libspdk_rdma_provider.so 00:04:05.111 SO libspdk_rdma_utils.so.1.0 00:04:05.111 CC lib/idxd/idxd_kernel.o 00:04:05.111 SYMLINK libspdk_conf.so 00:04:05.111 CC lib/env_dpdk/init.o 00:04:05.111 SYMLINK libspdk_rdma_utils.so 00:04:05.111 CC lib/env_dpdk/threads.o 00:04:05.370 CC lib/env_dpdk/pci_ioat.o 00:04:05.370 CC lib/env_dpdk/pci_virtio.o 00:04:05.370 CC lib/env_dpdk/pci_vmd.o 00:04:05.370 CC lib/env_dpdk/pci_idxd.o 00:04:05.370 LIB libspdk_json.a 00:04:05.370 CC lib/env_dpdk/pci_event.o 00:04:05.370 SO libspdk_json.so.6.0 00:04:05.370 CC lib/env_dpdk/sigbus_handler.o 00:04:05.627 CC lib/env_dpdk/pci_dpdk.o 00:04:05.627 SYMLINK libspdk_json.so 00:04:05.627 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:05.627 LIB 
libspdk_idxd.a 00:04:05.627 LIB libspdk_vmd.a 00:04:05.627 SO libspdk_idxd.so.12.1 00:04:05.627 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:05.627 SO libspdk_vmd.so.6.0 00:04:05.627 SYMLINK libspdk_idxd.so 00:04:05.627 SYMLINK libspdk_vmd.so 00:04:05.886 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:05.886 CC lib/jsonrpc/jsonrpc_server.o 00:04:05.886 CC lib/jsonrpc/jsonrpc_client.o 00:04:05.886 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:06.144 LIB libspdk_jsonrpc.a 00:04:06.144 SO libspdk_jsonrpc.so.6.0 00:04:06.402 SYMLINK libspdk_jsonrpc.so 00:04:06.659 LIB libspdk_env_dpdk.a 00:04:06.659 SO libspdk_env_dpdk.so.15.0 00:04:06.659 CC lib/rpc/rpc.o 00:04:06.916 SYMLINK libspdk_env_dpdk.so 00:04:06.916 LIB libspdk_rpc.a 00:04:06.916 SO libspdk_rpc.so.6.0 00:04:07.174 SYMLINK libspdk_rpc.so 00:04:07.432 CC lib/notify/notify.o 00:04:07.432 CC lib/notify/notify_rpc.o 00:04:07.432 CC lib/trace/trace_flags.o 00:04:07.432 CC lib/trace/trace.o 00:04:07.432 CC lib/trace/trace_rpc.o 00:04:07.432 CC lib/keyring/keyring.o 00:04:07.432 CC lib/keyring/keyring_rpc.o 00:04:07.690 LIB libspdk_notify.a 00:04:07.690 SO libspdk_notify.so.6.0 00:04:07.690 LIB libspdk_keyring.a 00:04:07.690 SYMLINK libspdk_notify.so 00:04:07.690 LIB libspdk_trace.a 00:04:07.690 SO libspdk_trace.so.11.0 00:04:07.690 SO libspdk_keyring.so.2.0 00:04:07.947 SYMLINK libspdk_trace.so 00:04:07.947 SYMLINK libspdk_keyring.so 00:04:08.205 CC lib/sock/sock.o 00:04:08.205 CC lib/thread/thread.o 00:04:08.205 CC lib/sock/sock_rpc.o 00:04:08.205 CC lib/thread/iobuf.o 00:04:08.772 LIB libspdk_sock.a 00:04:08.772 SO libspdk_sock.so.10.0 00:04:08.772 SYMLINK libspdk_sock.so 00:04:09.338 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:09.338 CC lib/nvme/nvme_ctrlr.o 00:04:09.338 CC lib/nvme/nvme_fabric.o 00:04:09.338 CC lib/nvme/nvme_ns_cmd.o 00:04:09.338 CC lib/nvme/nvme_ns.o 00:04:09.338 CC lib/nvme/nvme_pcie_common.o 00:04:09.338 CC lib/nvme/nvme_pcie.o 00:04:09.338 CC lib/nvme/nvme_qpair.o 00:04:09.338 CC lib/nvme/nvme.o 00:04:10.272 CC 
lib/nvme/nvme_quirks.o 00:04:10.272 CC lib/nvme/nvme_transport.o 00:04:10.272 LIB libspdk_thread.a 00:04:10.272 SO libspdk_thread.so.10.1 00:04:10.272 CC lib/nvme/nvme_discovery.o 00:04:10.272 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:10.272 SYMLINK libspdk_thread.so 00:04:10.272 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:10.272 CC lib/nvme/nvme_tcp.o 00:04:10.530 CC lib/accel/accel.o 00:04:10.530 CC lib/blob/blobstore.o 00:04:10.530 CC lib/blob/request.o 00:04:10.788 CC lib/blob/zeroes.o 00:04:10.788 CC lib/blob/blob_bs_dev.o 00:04:10.788 CC lib/init/json_config.o 00:04:10.788 CC lib/init/subsystem.o 00:04:10.788 CC lib/init/subsystem_rpc.o 00:04:10.788 CC lib/init/rpc.o 00:04:10.788 CC lib/accel/accel_rpc.o 00:04:11.048 CC lib/accel/accel_sw.o 00:04:11.048 CC lib/nvme/nvme_opal.o 00:04:11.048 CC lib/nvme/nvme_io_msg.o 00:04:11.048 CC lib/nvme/nvme_poll_group.o 00:04:11.048 LIB libspdk_init.a 00:04:11.048 SO libspdk_init.so.6.0 00:04:11.312 SYMLINK libspdk_init.so 00:04:11.312 CC lib/virtio/virtio.o 00:04:11.312 CC lib/fsdev/fsdev.o 00:04:11.569 CC lib/event/app.o 00:04:11.569 CC lib/event/reactor.o 00:04:11.569 CC lib/virtio/virtio_vhost_user.o 00:04:11.569 CC lib/virtio/virtio_vfio_user.o 00:04:11.828 CC lib/virtio/virtio_pci.o 00:04:11.828 LIB libspdk_accel.a 00:04:11.828 SO libspdk_accel.so.16.0 00:04:11.828 CC lib/fsdev/fsdev_io.o 00:04:11.828 SYMLINK libspdk_accel.so 00:04:11.828 CC lib/fsdev/fsdev_rpc.o 00:04:12.086 CC lib/event/log_rpc.o 00:04:12.086 LIB libspdk_virtio.a 00:04:12.086 CC lib/event/app_rpc.o 00:04:12.086 CC lib/nvme/nvme_zns.o 00:04:12.086 CC lib/nvme/nvme_stubs.o 00:04:12.086 SO libspdk_virtio.so.7.0 00:04:12.086 CC lib/nvme/nvme_auth.o 00:04:12.086 CC lib/bdev/bdev.o 00:04:12.086 CC lib/bdev/bdev_rpc.o 00:04:12.086 SYMLINK libspdk_virtio.so 00:04:12.344 CC lib/bdev/bdev_zone.o 00:04:12.344 CC lib/event/scheduler_static.o 00:04:12.344 CC lib/nvme/nvme_cuse.o 00:04:12.344 LIB libspdk_event.a 00:04:12.344 LIB libspdk_fsdev.a 00:04:12.344 CC 
lib/bdev/part.o 00:04:12.344 SO libspdk_event.so.14.0 00:04:12.603 SO libspdk_fsdev.so.1.0 00:04:12.603 SYMLINK libspdk_event.so 00:04:12.603 CC lib/nvme/nvme_rdma.o 00:04:12.603 CC lib/bdev/scsi_nvme.o 00:04:12.603 SYMLINK libspdk_fsdev.so 00:04:12.862 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:13.797 LIB libspdk_fuse_dispatcher.a 00:04:13.797 SO libspdk_fuse_dispatcher.so.1.0 00:04:13.797 SYMLINK libspdk_fuse_dispatcher.so 00:04:14.404 LIB libspdk_nvme.a 00:04:14.404 LIB libspdk_blob.a 00:04:14.404 SO libspdk_blob.so.11.0 00:04:14.404 SO libspdk_nvme.so.14.0 00:04:14.663 SYMLINK libspdk_blob.so 00:04:14.663 SYMLINK libspdk_nvme.so 00:04:14.921 CC lib/blobfs/tree.o 00:04:14.921 CC lib/lvol/lvol.o 00:04:14.921 CC lib/blobfs/blobfs.o 00:04:15.487 LIB libspdk_bdev.a 00:04:15.746 SO libspdk_bdev.so.17.0 00:04:15.746 SYMLINK libspdk_bdev.so 00:04:16.005 LIB libspdk_blobfs.a 00:04:16.005 SO libspdk_blobfs.so.10.0 00:04:16.005 CC lib/nvmf/ctrlr_discovery.o 00:04:16.005 CC lib/nvmf/ctrlr.o 00:04:16.005 CC lib/nvmf/ctrlr_bdev.o 00:04:16.005 CC lib/nvmf/subsystem.o 00:04:16.005 CC lib/scsi/dev.o 00:04:16.005 CC lib/ftl/ftl_core.o 00:04:16.005 CC lib/nbd/nbd.o 00:04:16.005 CC lib/ublk/ublk.o 00:04:16.263 SYMLINK libspdk_blobfs.so 00:04:16.263 CC lib/nbd/nbd_rpc.o 00:04:16.263 LIB libspdk_lvol.a 00:04:16.263 SO libspdk_lvol.so.10.0 00:04:16.263 SYMLINK libspdk_lvol.so 00:04:16.263 CC lib/ublk/ublk_rpc.o 00:04:16.264 CC lib/scsi/lun.o 00:04:16.264 CC lib/scsi/port.o 00:04:16.522 CC lib/ftl/ftl_init.o 00:04:16.522 CC lib/ftl/ftl_layout.o 00:04:16.522 CC lib/scsi/scsi.o 00:04:16.522 LIB libspdk_nbd.a 00:04:16.522 SO libspdk_nbd.so.7.0 00:04:16.522 CC lib/scsi/scsi_bdev.o 00:04:16.780 CC lib/nvmf/nvmf.o 00:04:16.780 SYMLINK libspdk_nbd.so 00:04:16.780 CC lib/ftl/ftl_debug.o 00:04:16.780 CC lib/nvmf/nvmf_rpc.o 00:04:16.780 CC lib/ftl/ftl_io.o 00:04:17.133 CC lib/nvmf/transport.o 00:04:17.133 CC lib/nvmf/tcp.o 00:04:17.133 CC lib/nvmf/stubs.o 00:04:17.133 CC lib/ftl/ftl_sb.o 
00:04:17.133 LIB libspdk_ublk.a 00:04:17.133 SO libspdk_ublk.so.3.0 00:04:17.405 CC lib/scsi/scsi_pr.o 00:04:17.405 SYMLINK libspdk_ublk.so 00:04:17.405 CC lib/ftl/ftl_l2p.o 00:04:17.405 CC lib/nvmf/mdns_server.o 00:04:17.405 CC lib/scsi/scsi_rpc.o 00:04:17.405 CC lib/ftl/ftl_l2p_flat.o 00:04:17.664 CC lib/nvmf/rdma.o 00:04:17.664 CC lib/scsi/task.o 00:04:17.664 CC lib/nvmf/auth.o 00:04:17.664 CC lib/ftl/ftl_nv_cache.o 00:04:17.664 CC lib/ftl/ftl_band.o 00:04:17.664 CC lib/ftl/ftl_band_ops.o 00:04:17.664 CC lib/ftl/ftl_writer.o 00:04:17.923 CC lib/ftl/ftl_rq.o 00:04:17.923 LIB libspdk_scsi.a 00:04:17.923 SO libspdk_scsi.so.9.0 00:04:17.923 CC lib/ftl/ftl_reloc.o 00:04:18.181 CC lib/ftl/ftl_l2p_cache.o 00:04:18.181 SYMLINK libspdk_scsi.so 00:04:18.181 CC lib/ftl/ftl_p2l.o 00:04:18.181 CC lib/ftl/ftl_p2l_log.o 00:04:18.181 CC lib/ftl/mngt/ftl_mngt.o 00:04:18.440 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:18.440 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:18.440 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:18.699 CC lib/iscsi/conn.o 00:04:18.699 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:18.699 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:18.699 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:18.699 CC lib/vhost/vhost.o 00:04:18.699 CC lib/vhost/vhost_rpc.o 00:04:18.699 CC lib/vhost/vhost_scsi.o 00:04:18.959 CC lib/vhost/vhost_blk.o 00:04:18.959 CC lib/vhost/rte_vhost_user.o 00:04:18.959 CC lib/iscsi/init_grp.o 00:04:18.959 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:18.959 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:19.218 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:19.218 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:19.477 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:19.477 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:19.477 CC lib/iscsi/iscsi.o 00:04:19.477 CC lib/iscsi/param.o 00:04:19.477 CC lib/ftl/utils/ftl_conf.o 00:04:19.477 CC lib/iscsi/portal_grp.o 00:04:19.477 CC lib/ftl/utils/ftl_md.o 00:04:19.736 CC lib/iscsi/tgt_node.o 00:04:19.736 CC lib/ftl/utils/ftl_mempool.o 00:04:19.736 CC lib/iscsi/iscsi_subsystem.o 
00:04:19.736 CC lib/iscsi/iscsi_rpc.o 00:04:19.736 CC lib/iscsi/task.o 00:04:19.736 CC lib/ftl/utils/ftl_bitmap.o 00:04:19.995 CC lib/ftl/utils/ftl_property.o 00:04:19.995 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:19.995 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:19.995 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:20.253 LIB libspdk_vhost.a 00:04:20.253 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:20.253 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:20.253 SO libspdk_vhost.so.8.0 00:04:20.253 LIB libspdk_nvmf.a 00:04:20.253 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:20.253 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:20.253 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:20.253 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:20.253 SYMLINK libspdk_vhost.so 00:04:20.253 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:20.253 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:20.253 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:20.512 SO libspdk_nvmf.so.19.0 00:04:20.512 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:20.512 CC lib/ftl/base/ftl_base_dev.o 00:04:20.512 CC lib/ftl/base/ftl_base_bdev.o 00:04:20.512 CC lib/ftl/ftl_trace.o 00:04:20.772 SYMLINK libspdk_nvmf.so 00:04:20.772 LIB libspdk_ftl.a 00:04:21.032 LIB libspdk_iscsi.a 00:04:21.032 SO libspdk_ftl.so.9.0 00:04:21.291 SO libspdk_iscsi.so.8.0 00:04:21.291 SYMLINK libspdk_iscsi.so 00:04:21.291 SYMLINK libspdk_ftl.so 00:04:21.860 CC module/env_dpdk/env_dpdk_rpc.o 00:04:21.860 CC module/sock/posix/posix.o 00:04:21.860 CC module/accel/ioat/accel_ioat.o 00:04:21.860 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:21.860 CC module/accel/error/accel_error.o 00:04:21.860 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:21.860 CC module/keyring/file/keyring.o 00:04:21.860 CC module/blob/bdev/blob_bdev.o 00:04:21.860 CC module/fsdev/aio/fsdev_aio.o 00:04:21.860 CC module/scheduler/gscheduler/gscheduler.o 00:04:21.860 LIB libspdk_env_dpdk_rpc.a 00:04:21.860 SO libspdk_env_dpdk_rpc.so.6.0 00:04:22.119 SYMLINK libspdk_env_dpdk_rpc.so 00:04:22.119 CC 
module/accel/error/accel_error_rpc.o 00:04:22.119 CC module/keyring/file/keyring_rpc.o 00:04:22.119 LIB libspdk_scheduler_dpdk_governor.a 00:04:22.119 LIB libspdk_scheduler_gscheduler.a 00:04:22.119 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:22.119 CC module/accel/ioat/accel_ioat_rpc.o 00:04:22.119 SO libspdk_scheduler_gscheduler.so.4.0 00:04:22.119 LIB libspdk_scheduler_dynamic.a 00:04:22.119 SO libspdk_scheduler_dynamic.so.4.0 00:04:22.120 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:22.120 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:22.120 SYMLINK libspdk_scheduler_gscheduler.so 00:04:22.120 CC module/fsdev/aio/linux_aio_mgr.o 00:04:22.120 LIB libspdk_keyring_file.a 00:04:22.120 LIB libspdk_blob_bdev.a 00:04:22.120 LIB libspdk_accel_error.a 00:04:22.120 SYMLINK libspdk_scheduler_dynamic.so 00:04:22.120 LIB libspdk_accel_ioat.a 00:04:22.120 SO libspdk_keyring_file.so.2.0 00:04:22.120 SO libspdk_blob_bdev.so.11.0 00:04:22.120 SO libspdk_accel_error.so.2.0 00:04:22.400 SO libspdk_accel_ioat.so.6.0 00:04:22.400 CC module/accel/dsa/accel_dsa.o 00:04:22.400 SYMLINK libspdk_keyring_file.so 00:04:22.400 SYMLINK libspdk_blob_bdev.so 00:04:22.400 SYMLINK libspdk_accel_error.so 00:04:22.400 CC module/accel/dsa/accel_dsa_rpc.o 00:04:22.401 SYMLINK libspdk_accel_ioat.so 00:04:22.401 CC module/accel/iaa/accel_iaa.o 00:04:22.401 CC module/accel/iaa/accel_iaa_rpc.o 00:04:22.401 CC module/keyring/linux/keyring.o 00:04:22.661 CC module/bdev/gpt/gpt.o 00:04:22.661 CC module/bdev/delay/vbdev_delay.o 00:04:22.661 LIB libspdk_accel_dsa.a 00:04:22.661 CC module/blobfs/bdev/blobfs_bdev.o 00:04:22.661 CC module/bdev/error/vbdev_error.o 00:04:22.661 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:22.661 SO libspdk_accel_dsa.so.5.0 00:04:22.661 LIB libspdk_accel_iaa.a 00:04:22.661 CC module/keyring/linux/keyring_rpc.o 00:04:22.661 LIB libspdk_fsdev_aio.a 00:04:22.661 SO libspdk_accel_iaa.so.3.0 00:04:22.661 SO libspdk_fsdev_aio.so.1.0 00:04:22.661 SYMLINK libspdk_accel_dsa.so 
00:04:22.661 CC module/bdev/error/vbdev_error_rpc.o 00:04:22.661 LIB libspdk_sock_posix.a 00:04:22.661 SYMLINK libspdk_accel_iaa.so 00:04:22.661 LIB libspdk_keyring_linux.a 00:04:22.661 SYMLINK libspdk_fsdev_aio.so 00:04:22.661 CC module/bdev/gpt/vbdev_gpt.o 00:04:22.661 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:22.661 LIB libspdk_blobfs_bdev.a 00:04:22.920 SO libspdk_sock_posix.so.6.0 00:04:22.920 SO libspdk_keyring_linux.so.1.0 00:04:22.920 SO libspdk_blobfs_bdev.so.6.0 00:04:22.920 SYMLINK libspdk_keyring_linux.so 00:04:22.920 SYMLINK libspdk_sock_posix.so 00:04:22.920 SYMLINK libspdk_blobfs_bdev.so 00:04:22.920 LIB libspdk_bdev_error.a 00:04:22.920 SO libspdk_bdev_error.so.6.0 00:04:22.920 CC module/bdev/lvol/vbdev_lvol.o 00:04:22.920 CC module/bdev/malloc/bdev_malloc.o 00:04:22.920 LIB libspdk_bdev_delay.a 00:04:22.920 SYMLINK libspdk_bdev_error.so 00:04:22.920 SO libspdk_bdev_delay.so.6.0 00:04:22.920 CC module/bdev/null/bdev_null.o 00:04:23.178 CC module/bdev/raid/bdev_raid.o 00:04:23.178 CC module/bdev/nvme/bdev_nvme.o 00:04:23.178 LIB libspdk_bdev_gpt.a 00:04:23.178 CC module/bdev/passthru/vbdev_passthru.o 00:04:23.178 SO libspdk_bdev_gpt.so.6.0 00:04:23.178 CC module/bdev/split/vbdev_split.o 00:04:23.178 SYMLINK libspdk_bdev_delay.so 00:04:23.178 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:23.178 SYMLINK libspdk_bdev_gpt.so 00:04:23.178 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:23.178 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:23.437 CC module/bdev/null/bdev_null_rpc.o 00:04:23.437 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:23.437 CC module/bdev/split/vbdev_split_rpc.o 00:04:23.437 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:23.437 LIB libspdk_bdev_passthru.a 00:04:23.437 CC module/bdev/nvme/nvme_rpc.o 00:04:23.437 SO libspdk_bdev_passthru.so.6.0 00:04:23.437 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:23.437 LIB libspdk_bdev_split.a 00:04:23.437 LIB libspdk_bdev_null.a 00:04:23.437 SO libspdk_bdev_split.so.6.0 00:04:23.437 
LIB libspdk_bdev_malloc.a 00:04:23.437 SO libspdk_bdev_null.so.6.0 00:04:23.437 LIB libspdk_bdev_zone_block.a 00:04:23.697 SYMLINK libspdk_bdev_passthru.so 00:04:23.697 CC module/bdev/nvme/bdev_mdns_client.o 00:04:23.697 SO libspdk_bdev_malloc.so.6.0 00:04:23.697 SO libspdk_bdev_zone_block.so.6.0 00:04:23.697 SYMLINK libspdk_bdev_split.so 00:04:23.697 SYMLINK libspdk_bdev_null.so 00:04:23.697 CC module/bdev/raid/bdev_raid_rpc.o 00:04:23.697 CC module/bdev/raid/bdev_raid_sb.o 00:04:23.697 SYMLINK libspdk_bdev_zone_block.so 00:04:23.697 SYMLINK libspdk_bdev_malloc.so 00:04:23.697 CC module/bdev/nvme/vbdev_opal.o 00:04:23.697 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:23.697 CC module/bdev/aio/bdev_aio.o 00:04:23.697 CC module/bdev/aio/bdev_aio_rpc.o 00:04:23.957 CC module/bdev/raid/raid0.o 00:04:23.957 CC module/bdev/ftl/bdev_ftl.o 00:04:23.957 LIB libspdk_bdev_lvol.a 00:04:23.957 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:23.957 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:23.957 SO libspdk_bdev_lvol.so.6.0 00:04:23.957 CC module/bdev/raid/raid1.o 00:04:23.957 SYMLINK libspdk_bdev_lvol.so 00:04:24.217 CC module/bdev/raid/concat.o 00:04:24.217 CC module/bdev/raid/raid5f.o 00:04:24.217 CC module/bdev/iscsi/bdev_iscsi.o 00:04:24.217 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:24.217 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:24.217 LIB libspdk_bdev_aio.a 00:04:24.217 LIB libspdk_bdev_ftl.a 00:04:24.217 SO libspdk_bdev_aio.so.6.0 00:04:24.217 SO libspdk_bdev_ftl.so.6.0 00:04:24.217 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:24.217 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:24.217 SYMLINK libspdk_bdev_aio.so 00:04:24.217 SYMLINK libspdk_bdev_ftl.so 00:04:24.476 LIB libspdk_bdev_iscsi.a 00:04:24.476 SO libspdk_bdev_iscsi.so.6.0 00:04:24.736 LIB libspdk_bdev_raid.a 00:04:24.736 SYMLINK libspdk_bdev_iscsi.so 00:04:24.736 LIB libspdk_bdev_virtio.a 00:04:24.736 SO libspdk_bdev_raid.so.6.0 00:04:24.736 SO libspdk_bdev_virtio.so.6.0 00:04:24.736 SYMLINK 
libspdk_bdev_virtio.so 00:04:24.995 SYMLINK libspdk_bdev_raid.so 00:04:25.565 LIB libspdk_bdev_nvme.a 00:04:25.825 SO libspdk_bdev_nvme.so.7.0 00:04:25.825 SYMLINK libspdk_bdev_nvme.so 00:04:26.760 CC module/event/subsystems/scheduler/scheduler.o 00:04:26.760 CC module/event/subsystems/sock/sock.o 00:04:26.760 CC module/event/subsystems/iobuf/iobuf.o 00:04:26.760 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:26.760 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:26.760 CC module/event/subsystems/vmd/vmd.o 00:04:26.760 CC module/event/subsystems/fsdev/fsdev.o 00:04:26.760 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:26.760 CC module/event/subsystems/keyring/keyring.o 00:04:26.760 LIB libspdk_event_sock.a 00:04:26.760 LIB libspdk_event_fsdev.a 00:04:26.760 LIB libspdk_event_vhost_blk.a 00:04:26.760 LIB libspdk_event_scheduler.a 00:04:26.760 LIB libspdk_event_keyring.a 00:04:26.760 LIB libspdk_event_vmd.a 00:04:26.760 LIB libspdk_event_iobuf.a 00:04:26.760 SO libspdk_event_sock.so.5.0 00:04:26.760 SO libspdk_event_vhost_blk.so.3.0 00:04:26.760 SO libspdk_event_fsdev.so.1.0 00:04:26.760 SO libspdk_event_scheduler.so.4.0 00:04:26.760 SO libspdk_event_keyring.so.1.0 00:04:26.760 SO libspdk_event_vmd.so.6.0 00:04:26.760 SO libspdk_event_iobuf.so.3.0 00:04:26.760 SYMLINK libspdk_event_sock.so 00:04:26.760 SYMLINK libspdk_event_vhost_blk.so 00:04:26.760 SYMLINK libspdk_event_fsdev.so 00:04:26.760 SYMLINK libspdk_event_keyring.so 00:04:26.760 SYMLINK libspdk_event_scheduler.so 00:04:26.760 SYMLINK libspdk_event_vmd.so 00:04:26.760 SYMLINK libspdk_event_iobuf.so 00:04:27.328 CC module/event/subsystems/accel/accel.o 00:04:27.328 LIB libspdk_event_accel.a 00:04:27.328 SO libspdk_event_accel.so.6.0 00:04:27.588 SYMLINK libspdk_event_accel.so 00:04:27.847 CC module/event/subsystems/bdev/bdev.o 00:04:28.106 LIB libspdk_event_bdev.a 00:04:28.106 SO libspdk_event_bdev.so.6.0 00:04:28.106 SYMLINK libspdk_event_bdev.so 00:04:28.366 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.366 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.626 CC module/event/subsystems/nbd/nbd.o 00:04:28.626 CC module/event/subsystems/ublk/ublk.o 00:04:28.626 CC module/event/subsystems/scsi/scsi.o 00:04:28.626 LIB libspdk_event_nbd.a 00:04:28.626 LIB libspdk_event_ublk.a 00:04:28.626 SO libspdk_event_nbd.so.6.0 00:04:28.626 LIB libspdk_event_scsi.a 00:04:28.626 SO libspdk_event_ublk.so.3.0 00:04:28.626 SO libspdk_event_scsi.so.6.0 00:04:28.626 LIB libspdk_event_nvmf.a 00:04:28.626 SYMLINK libspdk_event_nbd.so 00:04:28.885 SYMLINK libspdk_event_ublk.so 00:04:28.885 SO libspdk_event_nvmf.so.6.0 00:04:28.885 SYMLINK libspdk_event_scsi.so 00:04:28.885 SYMLINK libspdk_event_nvmf.so 00:04:29.144 CC module/event/subsystems/iscsi/iscsi.o 00:04:29.144 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:29.403 LIB libspdk_event_iscsi.a 00:04:29.403 LIB libspdk_event_vhost_scsi.a 00:04:29.403 SO libspdk_event_iscsi.so.6.0 00:04:29.403 SO libspdk_event_vhost_scsi.so.3.0 00:04:29.403 SYMLINK libspdk_event_iscsi.so 00:04:29.403 SYMLINK libspdk_event_vhost_scsi.so 00:04:29.663 SO libspdk.so.6.0 00:04:29.663 SYMLINK libspdk.so 00:04:29.928 CXX app/trace/trace.o 00:04:29.928 CC app/trace_record/trace_record.o 00:04:29.928 CC app/spdk_lspci/spdk_lspci.o 00:04:29.928 CC app/spdk_nvme_perf/perf.o 00:04:30.198 CC app/iscsi_tgt/iscsi_tgt.o 00:04:30.198 CC app/nvmf_tgt/nvmf_main.o 00:04:30.198 CC app/spdk_tgt/spdk_tgt.o 00:04:30.198 CC test/thread/poller_perf/poller_perf.o 00:04:30.198 CC test/dma/test_dma/test_dma.o 00:04:30.198 LINK spdk_lspci 00:04:30.198 CC examples/util/zipf/zipf.o 00:04:30.198 LINK nvmf_tgt 00:04:30.198 LINK poller_perf 00:04:30.198 LINK iscsi_tgt 00:04:30.198 LINK spdk_trace_record 00:04:30.460 LINK zipf 00:04:30.460 LINK spdk_tgt 00:04:30.460 LINK spdk_trace 00:04:30.460 CC app/spdk_nvme_identify/identify.o 00:04:30.460 CC app/spdk_nvme_discover/discovery_aer.o 00:04:30.460 CC app/spdk_top/spdk_top.o 
00:04:30.719 CC app/spdk_dd/spdk_dd.o 00:04:30.719 CC examples/ioat/perf/perf.o 00:04:30.719 LINK test_dma 00:04:30.719 CC test/app/bdev_svc/bdev_svc.o 00:04:30.719 CC test/app/histogram_perf/histogram_perf.o 00:04:30.719 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:30.719 LINK spdk_nvme_discover 00:04:30.978 LINK histogram_perf 00:04:30.978 LINK bdev_svc 00:04:30.978 LINK ioat_perf 00:04:30.978 LINK spdk_nvme_perf 00:04:30.978 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:30.978 LINK spdk_dd 00:04:30.978 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:31.237 CC examples/ioat/verify/verify.o 00:04:31.237 LINK nvme_fuzz 00:04:31.237 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:31.237 CC app/fio/nvme/fio_plugin.o 00:04:31.237 TEST_HEADER include/spdk/accel.h 00:04:31.237 TEST_HEADER include/spdk/accel_module.h 00:04:31.237 TEST_HEADER include/spdk/assert.h 00:04:31.237 CC app/vhost/vhost.o 00:04:31.237 TEST_HEADER include/spdk/barrier.h 00:04:31.237 TEST_HEADER include/spdk/base64.h 00:04:31.237 TEST_HEADER include/spdk/bdev.h 00:04:31.237 TEST_HEADER include/spdk/bdev_module.h 00:04:31.237 TEST_HEADER include/spdk/bdev_zone.h 00:04:31.237 TEST_HEADER include/spdk/bit_array.h 00:04:31.237 TEST_HEADER include/spdk/bit_pool.h 00:04:31.237 TEST_HEADER include/spdk/blob_bdev.h 00:04:31.237 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:31.237 TEST_HEADER include/spdk/blobfs.h 00:04:31.237 TEST_HEADER include/spdk/blob.h 00:04:31.237 TEST_HEADER include/spdk/conf.h 00:04:31.237 TEST_HEADER include/spdk/config.h 00:04:31.237 TEST_HEADER include/spdk/cpuset.h 00:04:31.237 TEST_HEADER include/spdk/crc16.h 00:04:31.237 TEST_HEADER include/spdk/crc32.h 00:04:31.237 TEST_HEADER include/spdk/crc64.h 00:04:31.237 TEST_HEADER include/spdk/dif.h 00:04:31.237 TEST_HEADER include/spdk/dma.h 00:04:31.237 TEST_HEADER include/spdk/endian.h 00:04:31.237 TEST_HEADER include/spdk/env_dpdk.h 00:04:31.237 TEST_HEADER include/spdk/env.h 00:04:31.237 TEST_HEADER include/spdk/event.h 
00:04:31.237 TEST_HEADER include/spdk/fd_group.h 00:04:31.237 TEST_HEADER include/spdk/fd.h 00:04:31.237 TEST_HEADER include/spdk/file.h 00:04:31.237 TEST_HEADER include/spdk/fsdev.h 00:04:31.237 TEST_HEADER include/spdk/fsdev_module.h 00:04:31.237 TEST_HEADER include/spdk/ftl.h 00:04:31.237 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:31.237 TEST_HEADER include/spdk/gpt_spec.h 00:04:31.237 TEST_HEADER include/spdk/hexlify.h 00:04:31.237 TEST_HEADER include/spdk/histogram_data.h 00:04:31.237 TEST_HEADER include/spdk/idxd.h 00:04:31.237 TEST_HEADER include/spdk/idxd_spec.h 00:04:31.237 TEST_HEADER include/spdk/init.h 00:04:31.237 TEST_HEADER include/spdk/ioat.h 00:04:31.237 TEST_HEADER include/spdk/ioat_spec.h 00:04:31.237 TEST_HEADER include/spdk/iscsi_spec.h 00:04:31.237 TEST_HEADER include/spdk/json.h 00:04:31.237 TEST_HEADER include/spdk/jsonrpc.h 00:04:31.237 TEST_HEADER include/spdk/keyring.h 00:04:31.237 TEST_HEADER include/spdk/keyring_module.h 00:04:31.237 TEST_HEADER include/spdk/likely.h 00:04:31.237 TEST_HEADER include/spdk/log.h 00:04:31.237 TEST_HEADER include/spdk/lvol.h 00:04:31.496 TEST_HEADER include/spdk/md5.h 00:04:31.496 TEST_HEADER include/spdk/memory.h 00:04:31.496 TEST_HEADER include/spdk/mmio.h 00:04:31.496 TEST_HEADER include/spdk/nbd.h 00:04:31.496 TEST_HEADER include/spdk/net.h 00:04:31.496 TEST_HEADER include/spdk/notify.h 00:04:31.496 TEST_HEADER include/spdk/nvme.h 00:04:31.496 TEST_HEADER include/spdk/nvme_intel.h 00:04:31.496 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:31.496 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:31.496 TEST_HEADER include/spdk/nvme_spec.h 00:04:31.496 TEST_HEADER include/spdk/nvme_zns.h 00:04:31.496 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:31.496 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:31.496 TEST_HEADER include/spdk/nvmf.h 00:04:31.496 TEST_HEADER include/spdk/nvmf_spec.h 00:04:31.496 TEST_HEADER include/spdk/nvmf_transport.h 00:04:31.496 TEST_HEADER include/spdk/opal.h 00:04:31.496 
TEST_HEADER include/spdk/opal_spec.h 00:04:31.496 TEST_HEADER include/spdk/pci_ids.h 00:04:31.496 TEST_HEADER include/spdk/pipe.h 00:04:31.496 TEST_HEADER include/spdk/queue.h 00:04:31.496 TEST_HEADER include/spdk/reduce.h 00:04:31.496 TEST_HEADER include/spdk/rpc.h 00:04:31.496 TEST_HEADER include/spdk/scheduler.h 00:04:31.496 TEST_HEADER include/spdk/scsi.h 00:04:31.496 TEST_HEADER include/spdk/scsi_spec.h 00:04:31.496 TEST_HEADER include/spdk/sock.h 00:04:31.496 TEST_HEADER include/spdk/stdinc.h 00:04:31.496 TEST_HEADER include/spdk/string.h 00:04:31.496 TEST_HEADER include/spdk/thread.h 00:04:31.496 TEST_HEADER include/spdk/trace.h 00:04:31.496 TEST_HEADER include/spdk/trace_parser.h 00:04:31.496 TEST_HEADER include/spdk/tree.h 00:04:31.496 TEST_HEADER include/spdk/ublk.h 00:04:31.496 TEST_HEADER include/spdk/util.h 00:04:31.496 TEST_HEADER include/spdk/uuid.h 00:04:31.496 TEST_HEADER include/spdk/version.h 00:04:31.496 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:31.496 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:31.496 TEST_HEADER include/spdk/vhost.h 00:04:31.496 TEST_HEADER include/spdk/vmd.h 00:04:31.496 TEST_HEADER include/spdk/xor.h 00:04:31.496 LINK verify 00:04:31.496 TEST_HEADER include/spdk/zipf.h 00:04:31.496 LINK vhost 00:04:31.496 CXX test/cpp_headers/accel.o 00:04:31.496 CC test/env/vtophys/vtophys.o 00:04:31.496 CC test/env/mem_callbacks/mem_callbacks.o 00:04:31.496 LINK spdk_nvme_identify 00:04:31.497 LINK spdk_top 00:04:31.755 CXX test/cpp_headers/accel_module.o 00:04:31.755 LINK vtophys 00:04:31.755 CXX test/cpp_headers/assert.o 00:04:31.755 LINK vhost_fuzz 00:04:31.755 CC examples/vmd/lsvmd/lsvmd.o 00:04:31.755 CC examples/vmd/led/led.o 00:04:31.755 CXX test/cpp_headers/barrier.o 00:04:31.755 LINK spdk_nvme 00:04:32.013 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:32.013 CC test/app/jsoncat/jsoncat.o 00:04:32.013 CC app/fio/bdev/fio_plugin.o 00:04:32.013 LINK lsvmd 00:04:32.013 CC test/app/stub/stub.o 00:04:32.014 LINK led 
00:04:32.014 CXX test/cpp_headers/base64.o 00:04:32.014 LINK jsoncat 00:04:32.014 LINK mem_callbacks 00:04:32.014 LINK env_dpdk_post_init 00:04:32.272 CC test/event/event_perf/event_perf.o 00:04:32.272 LINK stub 00:04:32.272 CXX test/cpp_headers/bdev.o 00:04:32.272 CXX test/cpp_headers/bdev_module.o 00:04:32.272 CC test/event/reactor/reactor.o 00:04:32.272 CXX test/cpp_headers/bdev_zone.o 00:04:32.272 LINK event_perf 00:04:32.530 CC test/env/memory/memory_ut.o 00:04:32.530 LINK reactor 00:04:32.530 CC examples/idxd/perf/perf.o 00:04:32.530 CXX test/cpp_headers/bit_array.o 00:04:32.530 CXX test/cpp_headers/bit_pool.o 00:04:32.530 LINK spdk_bdev 00:04:32.530 CC test/rpc_client/rpc_client_test.o 00:04:32.530 CC test/nvme/aer/aer.o 00:04:32.530 CC test/nvme/reset/reset.o 00:04:32.530 CXX test/cpp_headers/blob_bdev.o 00:04:32.530 CC test/event/reactor_perf/reactor_perf.o 00:04:32.791 CC test/event/app_repeat/app_repeat.o 00:04:32.791 LINK rpc_client_test 00:04:32.791 CC test/event/scheduler/scheduler.o 00:04:32.791 LINK reactor_perf 00:04:32.791 LINK idxd_perf 00:04:32.791 CXX test/cpp_headers/blobfs_bdev.o 00:04:32.791 LINK reset 00:04:33.052 LINK app_repeat 00:04:33.052 LINK aer 00:04:33.052 LINK iscsi_fuzz 00:04:33.052 CC test/env/pci/pci_ut.o 00:04:33.052 CXX test/cpp_headers/blobfs.o 00:04:33.052 LINK scheduler 00:04:33.052 CXX test/cpp_headers/blob.o 00:04:33.052 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:33.312 CC test/nvme/sgl/sgl.o 00:04:33.312 CC test/nvme/e2edp/nvme_dp.o 00:04:33.312 CC test/accel/dif/dif.o 00:04:33.312 CXX test/cpp_headers/conf.o 00:04:33.312 CXX test/cpp_headers/config.o 00:04:33.312 CXX test/cpp_headers/cpuset.o 00:04:33.312 LINK interrupt_tgt 00:04:33.312 CC examples/sock/hello_world/hello_sock.o 00:04:33.571 CC examples/thread/thread/thread_ex.o 00:04:33.571 LINK pci_ut 00:04:33.571 LINK sgl 00:04:33.571 LINK nvme_dp 00:04:33.571 CXX test/cpp_headers/crc16.o 00:04:33.571 CC test/nvme/overhead/overhead.o 00:04:33.571 CXX 
test/cpp_headers/crc32.o 00:04:33.571 LINK memory_ut 00:04:33.571 CC test/nvme/err_injection/err_injection.o 00:04:33.831 CXX test/cpp_headers/crc64.o 00:04:33.831 LINK hello_sock 00:04:33.831 LINK thread 00:04:33.831 CXX test/cpp_headers/dif.o 00:04:33.831 CC test/nvme/startup/startup.o 00:04:33.831 LINK overhead 00:04:33.831 LINK err_injection 00:04:33.831 CXX test/cpp_headers/dma.o 00:04:34.091 CC test/nvme/reserve/reserve.o 00:04:34.091 CC test/nvme/simple_copy/simple_copy.o 00:04:34.091 CC test/nvme/connect_stress/connect_stress.o 00:04:34.091 LINK startup 00:04:34.091 LINK dif 00:04:34.091 CXX test/cpp_headers/endian.o 00:04:34.091 CC test/blobfs/mkfs/mkfs.o 00:04:34.091 CC examples/nvme/hello_world/hello_world.o 00:04:34.091 CC examples/nvme/reconnect/reconnect.o 00:04:34.091 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:34.091 LINK connect_stress 00:04:34.091 LINK reserve 00:04:34.384 LINK simple_copy 00:04:34.384 CC examples/nvme/arbitration/arbitration.o 00:04:34.384 CXX test/cpp_headers/env_dpdk.o 00:04:34.384 LINK mkfs 00:04:34.384 CXX test/cpp_headers/env.o 00:04:34.384 CC examples/nvme/hotplug/hotplug.o 00:04:34.384 CXX test/cpp_headers/event.o 00:04:34.384 LINK hello_world 00:04:34.643 CC test/nvme/boot_partition/boot_partition.o 00:04:34.643 LINK reconnect 00:04:34.643 CXX test/cpp_headers/fd_group.o 00:04:34.643 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:34.643 CXX test/cpp_headers/fd.o 00:04:34.643 CC examples/nvme/abort/abort.o 00:04:34.643 CC test/nvme/compliance/nvme_compliance.o 00:04:34.643 LINK arbitration 00:04:34.643 LINK hotplug 00:04:34.643 LINK boot_partition 00:04:34.902 LINK nvme_manage 00:04:34.902 CXX test/cpp_headers/file.o 00:04:34.902 LINK cmb_copy 00:04:34.902 CXX test/cpp_headers/fsdev.o 00:04:34.902 CXX test/cpp_headers/fsdev_module.o 00:04:34.902 CXX test/cpp_headers/ftl.o 00:04:34.902 CC test/bdev/bdevio/bdevio.o 00:04:35.160 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.160 CC test/lvol/esnap/esnap.o 00:04:35.160 
CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.160 LINK nvme_compliance 00:04:35.160 LINK abort 00:04:35.160 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.160 CXX test/cpp_headers/fuse_dispatcher.o 00:04:35.160 CC test/nvme/cuse/cuse.o 00:04:35.160 CC test/nvme/fdp/fdp.o 00:04:35.160 LINK pmr_persistence 00:04:35.421 LINK fused_ordering 00:04:35.421 LINK doorbell_aers 00:04:35.421 CXX test/cpp_headers/gpt_spec.o 00:04:35.421 CXX test/cpp_headers/hexlify.o 00:04:35.421 LINK bdevio 00:04:35.421 CC examples/accel/perf/accel_perf.o 00:04:35.421 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.421 CXX test/cpp_headers/histogram_data.o 00:04:35.421 CXX test/cpp_headers/idxd.o 00:04:35.421 LINK fdp 00:04:35.680 CXX test/cpp_headers/idxd_spec.o 00:04:35.680 CXX test/cpp_headers/init.o 00:04:35.680 CXX test/cpp_headers/ioat.o 00:04:35.680 CC examples/blob/hello_world/hello_blob.o 00:04:35.680 CXX test/cpp_headers/ioat_spec.o 00:04:35.680 CXX test/cpp_headers/iscsi_spec.o 00:04:35.680 CC examples/blob/cli/blobcli.o 00:04:35.680 LINK hello_fsdev 00:04:35.680 CXX test/cpp_headers/json.o 00:04:35.939 CXX test/cpp_headers/jsonrpc.o 00:04:35.939 CXX test/cpp_headers/keyring.o 00:04:35.939 CXX test/cpp_headers/keyring_module.o 00:04:35.939 LINK hello_blob 00:04:35.939 CXX test/cpp_headers/likely.o 00:04:35.939 CXX test/cpp_headers/log.o 00:04:35.939 CXX test/cpp_headers/lvol.o 00:04:35.939 LINK accel_perf 00:04:35.939 CXX test/cpp_headers/md5.o 00:04:35.939 CXX test/cpp_headers/memory.o 00:04:36.199 CXX test/cpp_headers/mmio.o 00:04:36.199 CXX test/cpp_headers/nbd.o 00:04:36.199 CXX test/cpp_headers/net.o 00:04:36.199 CXX test/cpp_headers/notify.o 00:04:36.199 CXX test/cpp_headers/nvme.o 00:04:36.199 CXX test/cpp_headers/nvme_intel.o 00:04:36.199 LINK blobcli 00:04:36.199 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.199 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.199 CXX test/cpp_headers/nvme_spec.o 00:04:36.459 CXX test/cpp_headers/nvme_zns.o 00:04:36.459 
CC examples/bdev/hello_world/hello_bdev.o 00:04:36.459 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.459 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.459 LINK cuse 00:04:36.459 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.459 CXX test/cpp_headers/nvmf.o 00:04:36.459 CXX test/cpp_headers/nvmf_spec.o 00:04:36.459 CXX test/cpp_headers/nvmf_transport.o 00:04:36.459 CXX test/cpp_headers/opal.o 00:04:36.718 CXX test/cpp_headers/opal_spec.o 00:04:36.718 LINK hello_bdev 00:04:36.718 CXX test/cpp_headers/pci_ids.o 00:04:36.718 CXX test/cpp_headers/pipe.o 00:04:36.718 CXX test/cpp_headers/queue.o 00:04:36.718 CXX test/cpp_headers/reduce.o 00:04:36.718 CXX test/cpp_headers/rpc.o 00:04:36.718 CXX test/cpp_headers/scheduler.o 00:04:36.718 CXX test/cpp_headers/scsi.o 00:04:36.718 CXX test/cpp_headers/scsi_spec.o 00:04:36.718 CXX test/cpp_headers/sock.o 00:04:36.718 CXX test/cpp_headers/stdinc.o 00:04:36.718 CXX test/cpp_headers/string.o 00:04:36.718 CXX test/cpp_headers/thread.o 00:04:36.977 CXX test/cpp_headers/trace.o 00:04:36.977 CXX test/cpp_headers/trace_parser.o 00:04:36.977 CXX test/cpp_headers/tree.o 00:04:36.977 CXX test/cpp_headers/ublk.o 00:04:36.977 CXX test/cpp_headers/util.o 00:04:36.977 CXX test/cpp_headers/uuid.o 00:04:36.977 CXX test/cpp_headers/version.o 00:04:36.977 CXX test/cpp_headers/vfio_user_pci.o 00:04:36.977 CXX test/cpp_headers/vfio_user_spec.o 00:04:36.977 CXX test/cpp_headers/vhost.o 00:04:36.977 CXX test/cpp_headers/vmd.o 00:04:36.977 CXX test/cpp_headers/xor.o 00:04:37.237 CXX test/cpp_headers/zipf.o 00:04:37.237 LINK bdevperf 00:04:38.179 CC examples/nvmf/nvmf/nvmf.o 00:04:38.468 LINK nvmf 00:04:40.999 LINK esnap 00:04:41.563 00:04:41.563 real 1m26.936s 00:04:41.563 user 7m40.665s 00:04:41.563 sys 1m53.052s 00:04:41.563 ************************************ 00:04:41.563 END TEST make 00:04:41.563 ************************************ 00:04:41.563 22:21:37 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:41.563 22:21:37 make -- 
common/autotest_common.sh@10 -- $ set +x 00:04:41.563 22:21:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:41.563 22:21:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:41.563 22:21:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:41.563 22:21:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.563 22:21:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:41.563 22:21:37 -- pm/common@44 -- $ pid=5241 00:04:41.563 22:21:37 -- pm/common@50 -- $ kill -TERM 5241 00:04:41.563 22:21:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.563 22:21:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:41.563 22:21:37 -- pm/common@44 -- $ pid=5243 00:04:41.563 22:21:37 -- pm/common@50 -- $ kill -TERM 5243 00:04:41.563 22:21:37 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.563 22:21:37 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.563 22:21:37 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.822 22:21:37 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.822 22:21:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.822 22:21:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.822 22:21:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.822 22:21:37 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.822 22:21:37 -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.822 22:21:37 -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.822 22:21:37 -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.822 22:21:37 -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.822 22:21:37 -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.822 22:21:37 -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.822 22:21:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.822 22:21:37 -- scripts/common.sh@344 -- # case "$op" in 00:04:41.822 
22:21:37 -- scripts/common.sh@345 -- # : 1 00:04:41.822 22:21:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.822 22:21:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.822 22:21:37 -- scripts/common.sh@365 -- # decimal 1 00:04:41.822 22:21:37 -- scripts/common.sh@353 -- # local d=1 00:04:41.822 22:21:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.822 22:21:37 -- scripts/common.sh@355 -- # echo 1 00:04:41.822 22:21:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.822 22:21:37 -- scripts/common.sh@366 -- # decimal 2 00:04:41.822 22:21:37 -- scripts/common.sh@353 -- # local d=2 00:04:41.822 22:21:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.822 22:21:37 -- scripts/common.sh@355 -- # echo 2 00:04:41.822 22:21:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.822 22:21:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.822 22:21:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.822 22:21:37 -- scripts/common.sh@368 -- # return 0 00:04:41.822 22:21:37 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.822 22:21:37 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.822 --rc genhtml_branch_coverage=1 00:04:41.822 --rc genhtml_function_coverage=1 00:04:41.822 --rc genhtml_legend=1 00:04:41.822 --rc geninfo_all_blocks=1 00:04:41.822 --rc geninfo_unexecuted_blocks=1 00:04:41.822 00:04:41.822 ' 00:04:41.822 22:21:37 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.822 --rc genhtml_branch_coverage=1 00:04:41.822 --rc genhtml_function_coverage=1 00:04:41.822 --rc genhtml_legend=1 00:04:41.822 --rc geninfo_all_blocks=1 00:04:41.822 --rc geninfo_unexecuted_blocks=1 00:04:41.822 00:04:41.822 ' 00:04:41.822 22:21:37 -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.822 --rc genhtml_branch_coverage=1 00:04:41.822 --rc genhtml_function_coverage=1 00:04:41.822 --rc genhtml_legend=1 00:04:41.822 --rc geninfo_all_blocks=1 00:04:41.822 --rc geninfo_unexecuted_blocks=1 00:04:41.822 00:04:41.822 ' 00:04:41.822 22:21:37 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.822 --rc genhtml_branch_coverage=1 00:04:41.822 --rc genhtml_function_coverage=1 00:04:41.822 --rc genhtml_legend=1 00:04:41.822 --rc geninfo_all_blocks=1 00:04:41.822 --rc geninfo_unexecuted_blocks=1 00:04:41.822 00:04:41.822 ' 00:04:41.822 22:21:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.822 22:21:37 -- nvmf/common.sh@7 -- # uname -s 00:04:41.822 22:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.822 22:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.822 22:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.822 22:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.822 22:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.822 22:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.822 22:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.822 22:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.822 22:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.822 22:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.823 22:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0088fc5-d467-4219-97da-2837b0f3aecb 00:04:41.823 22:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=e0088fc5-d467-4219-97da-2837b0f3aecb 00:04:41.823 22:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.823 22:21:37 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:04:41.823 22:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.823 22:21:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.823 22:21:37 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.823 22:21:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.823 22:21:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.823 22:21:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.823 22:21:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.823 22:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.823 22:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.823 22:21:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.823 22:21:37 -- paths/export.sh@5 -- # export PATH 00:04:41.823 22:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.823 22:21:37 -- nvmf/common.sh@51 -- # : 
0 00:04:41.823 22:21:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.823 22:21:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.823 22:21:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.823 22:21:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.823 22:21:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.823 22:21:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.823 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.823 22:21:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.823 22:21:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.823 22:21:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.823 22:21:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.823 22:21:37 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.823 22:21:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.823 22:21:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.823 22:21:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.823 22:21:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.823 22:21:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.823 22:21:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.823 22:21:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.823 22:21:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.823 22:21:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.823 22:21:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54206 00:04:41.823 22:21:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:41.823 22:21:37 -- pm/common@17 -- # local monitor 00:04:41.823 22:21:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.823 22:21:37 -- 
pm/common@21 -- # date +%s 00:04:41.823 22:21:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.823 22:21:37 -- pm/common@25 -- # sleep 1 00:04:41.823 22:21:37 -- pm/common@21 -- # date +%s 00:04:41.823 22:21:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727475697 00:04:41.823 22:21:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727475697 00:04:41.823 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727475697_collect-cpu-load.pm.log 00:04:41.823 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727475697_collect-vmstat.pm.log 00:04:42.759 22:21:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:42.759 22:21:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:42.759 22:21:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.759 22:21:38 -- common/autotest_common.sh@10 -- # set +x 00:04:42.759 22:21:38 -- spdk/autotest.sh@59 -- # create_test_list 00:04:42.759 22:21:38 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:42.759 22:21:38 -- common/autotest_common.sh@10 -- # set +x 00:04:43.018 22:21:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:43.018 22:21:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:43.018 22:21:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:43.018 22:21:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:43.018 22:21:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:43.018 22:21:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.018 22:21:38 -- common/autotest_common.sh@1455 -- # uname 00:04:43.018 22:21:38 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:43.018 22:21:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.018 22:21:38 -- common/autotest_common.sh@1475 -- # uname 00:04:43.018 22:21:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:43.018 22:21:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.018 22:21:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.018 lcov: LCOV version 1.15 00:04:43.018 22:21:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:57.924 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:57.924 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:16.052 22:22:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:16.052 22:22:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.052 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.052 22:22:09 -- spdk/autotest.sh@78 -- # rm -f 00:05:16.052 22:22:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.052 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:16.052 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:16.053 22:22:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:16.053 22:22:10 -- common/autotest_common.sh@1655 -- # 
zoned_devs=() 00:05:16.053 22:22:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:16.053 22:22:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:16.053 22:22:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:16.053 22:22:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:16.053 22:22:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:16.053 22:22:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:16.053 22:22:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:16.053 22:22:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:16.053 22:22:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:16.053 22:22:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:16.053 22:22:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:16.053 22:22:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:16.053 22:22:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:16.053 22:22:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:16.053 22:22:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:16.053 22:22:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:16.053 22:22:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:16.053 
22:22:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.053 22:22:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:16.053 22:22:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:16.053 22:22:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:16.053 22:22:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:16.053 No valid GPT data, bailing 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # pt= 00:05:16.053 22:22:10 -- scripts/common.sh@395 -- # return 1 00:05:16.053 22:22:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:16.053 1+0 records in 00:05:16.053 1+0 records out 00:05:16.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430578 s, 244 MB/s 00:05:16.053 22:22:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.053 22:22:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:16.053 22:22:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:16.053 22:22:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:16.053 22:22:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:16.053 No valid GPT data, bailing 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # pt= 00:05:16.053 22:22:10 -- scripts/common.sh@395 -- # return 1 00:05:16.053 22:22:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:16.053 1+0 records in 00:05:16.053 1+0 records out 00:05:16.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558312 s, 188 MB/s 00:05:16.053 22:22:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.053 22:22:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:16.053 22:22:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:16.053 
22:22:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:16.053 22:22:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:16.053 No valid GPT data, bailing 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # pt= 00:05:16.053 22:22:10 -- scripts/common.sh@395 -- # return 1 00:05:16.053 22:22:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:16.053 1+0 records in 00:05:16.053 1+0 records out 00:05:16.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633146 s, 166 MB/s 00:05:16.053 22:22:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.053 22:22:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:16.053 22:22:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:16.053 22:22:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:16.053 22:22:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:16.053 No valid GPT data, bailing 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:16.053 22:22:10 -- scripts/common.sh@394 -- # pt= 00:05:16.053 22:22:10 -- scripts/common.sh@395 -- # return 1 00:05:16.053 22:22:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:16.053 1+0 records in 00:05:16.053 1+0 records out 00:05:16.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607838 s, 173 MB/s 00:05:16.053 22:22:10 -- spdk/autotest.sh@105 -- # sync 00:05:16.053 22:22:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:16.053 22:22:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:16.053 22:22:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:18.002 22:22:13 -- spdk/autotest.sh@111 -- # uname -s 00:05:18.002 22:22:13 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:05:18.002 22:22:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:18.002 22:22:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:18.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.569 Hugepages 00:05:18.569 node hugesize free / total 00:05:18.569 node0 1048576kB 0 / 0 00:05:18.569 node0 2048kB 0 / 0 00:05:18.569 00:05:18.569 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.569 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:18.828 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:18.828 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:18.828 22:22:14 -- spdk/autotest.sh@117 -- # uname -s 00:05:18.828 22:22:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:18.828 22:22:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:18.828 22:22:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.765 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.024 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.024 22:22:15 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:20.962 22:22:16 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:20.962 22:22:16 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:20.962 22:22:16 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.962 22:22:16 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:20.962 22:22:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:20.962 22:22:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:20.962 22:22:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.962 22:22:16 -- common/autotest_common.sh@1497 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.962 22:22:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:20.962 22:22:16 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:20.962 22:22:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:20.962 22:22:16 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.528 Waiting for block devices as requested 00:05:21.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.788 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.788 22:22:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:21.788 22:22:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:21.788 22:22:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:21.788 22:22:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:21.788 22:22:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:21.788 22:22:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:21.788 22:22:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:21.788 22:22:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:21.788 22:22:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:21.788 22:22:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:21.788 22:22:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:21.788 22:22:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:21.788 22:22:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:22.047 22:22:17 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:22.047 22:22:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:22.047 22:22:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:22.047 22:22:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:22.047 22:22:17 -- common/autotest_common.sh@1541 -- # continue 00:05:22.047 22:22:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:22.047 22:22:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:22.047 22:22:17 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:22.047 22:22:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:22.047 22:22:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:22.047 22:22:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:22.047 22:22:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:22.047 22:22:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:22.047 22:22:17 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:22.047 22:22:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:22.047 22:22:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:22.047 22:22:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:22.047 22:22:17 -- common/autotest_common.sh@1541 -- # continue 00:05:22.047 22:22:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:22.047 22:22:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.047 22:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.047 22:22:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:22.047 22:22:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.047 22:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.047 22:22:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.986 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.986 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.250 22:22:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:23.250 22:22:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.250 22:22:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.250 22:22:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:23.250 22:22:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:23.250 22:22:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.250 22:22:18 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:23.250 22:22:18 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:23.250 22:22:18 -- 
common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:23.250 22:22:18 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:23.250 22:22:18 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:23.250 22:22:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:23.250 22:22:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:23.251 22:22:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.251 22:22:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.251 22:22:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:23.251 22:22:19 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:23.251 22:22:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:23.251 22:22:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:23.251 22:22:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:23.251 22:22:19 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:23.251 22:22:19 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.251 22:22:19 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:23.251 22:22:19 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:23.251 22:22:19 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:23.251 22:22:19 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:23.251 22:22:19 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:23.251 22:22:19 -- common/autotest_common.sh@1570 -- # return 0 00:05:23.251 22:22:19 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:23.251 22:22:19 -- common/autotest_common.sh@1578 -- # return 0 00:05:23.251 22:22:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:23.251 22:22:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:05:23.251 22:22:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.251 22:22:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:23.251 22:22:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:23.251 22:22:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.251 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.251 22:22:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:23.251 22:22:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.251 22:22:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.251 22:22:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.251 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.251 ************************************ 00:05:23.251 START TEST env 00:05:23.251 ************************************ 00:05:23.251 22:22:19 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:23.510 * Looking for test storage... 
00:05:23.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.510 22:22:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.510 22:22:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.510 22:22:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.510 22:22:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.510 22:22:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.510 22:22:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.510 22:22:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.510 22:22:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.510 22:22:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.510 22:22:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.510 22:22:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.510 22:22:19 env -- scripts/common.sh@344 -- # case "$op" in 00:05:23.510 22:22:19 env -- scripts/common.sh@345 -- # : 1 00:05:23.510 22:22:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.510 22:22:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.510 22:22:19 env -- scripts/common.sh@365 -- # decimal 1 00:05:23.510 22:22:19 env -- scripts/common.sh@353 -- # local d=1 00:05:23.510 22:22:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.510 22:22:19 env -- scripts/common.sh@355 -- # echo 1 00:05:23.510 22:22:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.510 22:22:19 env -- scripts/common.sh@366 -- # decimal 2 00:05:23.510 22:22:19 env -- scripts/common.sh@353 -- # local d=2 00:05:23.510 22:22:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.510 22:22:19 env -- scripts/common.sh@355 -- # echo 2 00:05:23.510 22:22:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.510 22:22:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.510 22:22:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.510 22:22:19 env -- scripts/common.sh@368 -- # return 0 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.510 --rc genhtml_branch_coverage=1 00:05:23.510 --rc genhtml_function_coverage=1 00:05:23.510 --rc genhtml_legend=1 00:05:23.510 --rc geninfo_all_blocks=1 00:05:23.510 --rc geninfo_unexecuted_blocks=1 00:05:23.510 00:05:23.510 ' 00:05:23.510 22:22:19 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.510 --rc genhtml_branch_coverage=1 00:05:23.511 --rc genhtml_function_coverage=1 00:05:23.511 --rc genhtml_legend=1 00:05:23.511 --rc geninfo_all_blocks=1 00:05:23.511 --rc geninfo_unexecuted_blocks=1 00:05:23.511 00:05:23.511 ' 00:05:23.511 22:22:19 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:23.511 --rc genhtml_branch_coverage=1 00:05:23.511 --rc genhtml_function_coverage=1 00:05:23.511 --rc genhtml_legend=1 00:05:23.511 --rc geninfo_all_blocks=1 00:05:23.511 --rc geninfo_unexecuted_blocks=1 00:05:23.511 00:05:23.511 ' 00:05:23.511 22:22:19 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.511 --rc genhtml_branch_coverage=1 00:05:23.511 --rc genhtml_function_coverage=1 00:05:23.511 --rc genhtml_legend=1 00:05:23.511 --rc geninfo_all_blocks=1 00:05:23.511 --rc geninfo_unexecuted_blocks=1 00:05:23.511 00:05:23.511 ' 00:05:23.511 22:22:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:23.511 22:22:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.511 22:22:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.511 22:22:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.511 ************************************ 00:05:23.511 START TEST env_memory 00:05:23.511 ************************************ 00:05:23.511 22:22:19 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:23.511 00:05:23.511 00:05:23.511 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.511 http://cunit.sourceforge.net/ 00:05:23.511 00:05:23.511 00:05:23.511 Suite: memory 00:05:23.511 Test: alloc and free memory map ...[2024-09-27 22:22:19.378571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.770 passed 00:05:23.770 Test: mem map translation ...[2024-09-27 22:22:19.424302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.770 [2024-09-27 22:22:19.424488] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.770 [2024-09-27 22:22:19.424641] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.770 [2024-09-27 22:22:19.424706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.770 passed 00:05:23.770 Test: mem map registration ...[2024-09-27 22:22:19.495211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:23.770 [2024-09-27 22:22:19.495423] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:23.770 passed 00:05:23.770 Test: mem map adjacent registrations ...passed 00:05:23.770 00:05:23.770 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.770 suites 1 1 n/a 0 0 00:05:23.770 tests 4 4 4 0 0 00:05:23.770 asserts 152 152 152 0 n/a 00:05:23.770 00:05:23.770 Elapsed time = 0.263 seconds 00:05:23.770 ************************************ 00:05:23.770 END TEST env_memory 00:05:23.770 ************************************ 00:05:23.770 00:05:23.770 real 0m0.318s 00:05:23.770 user 0m0.274s 00:05:23.770 sys 0m0.032s 00:05:23.770 22:22:19 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.770 22:22:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.029 22:22:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.029 22:22:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.029 22:22:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.029 22:22:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.029 
************************************ 00:05:24.029 START TEST env_vtophys 00:05:24.029 ************************************ 00:05:24.029 22:22:19 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:24.029 EAL: lib.eal log level changed from notice to debug 00:05:24.029 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.029 EAL: Detected lcore 1 as core 0 on socket 0 00:05:24.029 EAL: Detected lcore 2 as core 0 on socket 0 00:05:24.029 EAL: Detected lcore 3 as core 0 on socket 0 00:05:24.030 EAL: Detected lcore 4 as core 0 on socket 0 00:05:24.030 EAL: Detected lcore 5 as core 0 on socket 0 00:05:24.030 EAL: Detected lcore 6 as core 0 on socket 0 00:05:24.030 EAL: Detected lcore 7 as core 0 on socket 0 00:05:24.030 EAL: Detected lcore 8 as core 0 on socket 0 00:05:24.030 EAL: Detected lcore 9 as core 0 on socket 0 00:05:24.030 EAL: Maximum logical cores by configuration: 128 00:05:24.030 EAL: Detected CPU lcores: 10 00:05:24.030 EAL: Detected NUMA nodes: 1 00:05:24.030 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:24.030 EAL: Detected shared linkage of DPDK 00:05:24.030 EAL: No shared files mode enabled, IPC will be disabled 00:05:24.030 EAL: Selected IOVA mode 'PA' 00:05:24.030 EAL: Probing VFIO support... 00:05:24.030 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.030 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:24.030 EAL: Ask a virtual area of 0x2e000 bytes 00:05:24.030 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:24.030 EAL: Setting up physically contiguous memory... 
00:05:24.030 EAL: Setting maximum number of open files to 524288 00:05:24.030 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:24.030 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:24.030 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.030 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:24.030 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.030 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.030 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:24.030 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:24.030 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.030 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:24.030 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.030 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.030 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:24.030 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:24.030 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.030 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:24.030 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.030 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.030 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:24.030 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:24.030 EAL: Ask a virtual area of 0x61000 bytes 00:05:24.030 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:24.030 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:24.030 EAL: Ask a virtual area of 0x400000000 bytes 00:05:24.030 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:24.030 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:24.030 EAL: Hugepages will be freed exactly as allocated. 
00:05:24.030 EAL: No shared files mode enabled, IPC is disabled 00:05:24.030 EAL: No shared files mode enabled, IPC is disabled 00:05:24.030 EAL: TSC frequency is ~2490000 KHz 00:05:24.030 EAL: Main lcore 0 is ready (tid=7f33ed0d4a40;cpuset=[0]) 00:05:24.030 EAL: Trying to obtain current memory policy. 00:05:24.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.030 EAL: Restoring previous memory policy: 0 00:05:24.030 EAL: request: mp_malloc_sync 00:05:24.030 EAL: No shared files mode enabled, IPC is disabled 00:05:24.030 EAL: Heap on socket 0 was expanded by 2MB 00:05:24.030 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:24.030 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:24.289 EAL: Mem event callback 'spdk:(nil)' registered 00:05:24.289 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:24.289 00:05:24.289 00:05:24.289 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.289 http://cunit.sourceforge.net/ 00:05:24.289 00:05:24.289 00:05:24.289 Suite: components_suite 00:05:24.548 Test: vtophys_malloc_test ...passed 00:05:24.548 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:24.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.549 EAL: Restoring previous memory policy: 4 00:05:24.549 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.549 EAL: request: mp_malloc_sync 00:05:24.549 EAL: No shared files mode enabled, IPC is disabled 00:05:24.549 EAL: Heap on socket 0 was expanded by 4MB 00:05:24.549 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.549 EAL: request: mp_malloc_sync 00:05:24.549 EAL: No shared files mode enabled, IPC is disabled 00:05:24.549 EAL: Heap on socket 0 was shrunk by 4MB 00:05:24.549 EAL: Trying to obtain current memory policy. 
00:05:24.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.549 EAL: Restoring previous memory policy: 4 00:05:24.549 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.549 EAL: request: mp_malloc_sync 00:05:24.549 EAL: No shared files mode enabled, IPC is disabled 00:05:24.549 EAL: Heap on socket 0 was expanded by 6MB 00:05:24.549 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.549 EAL: request: mp_malloc_sync 00:05:24.549 EAL: No shared files mode enabled, IPC is disabled 00:05:24.549 EAL: Heap on socket 0 was shrunk by 6MB 00:05:24.549 EAL: Trying to obtain current memory policy. 00:05:24.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.549 EAL: Restoring previous memory policy: 4 00:05:24.549 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.549 EAL: request: mp_malloc_sync 00:05:24.549 EAL: No shared files mode enabled, IPC is disabled 00:05:24.549 EAL: Heap on socket 0 was expanded by 10MB 00:05:24.549 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.549 EAL: request: mp_malloc_sync 00:05:24.549 EAL: No shared files mode enabled, IPC is disabled 00:05:24.549 EAL: Heap on socket 0 was shrunk by 10MB 00:05:24.808 EAL: Trying to obtain current memory policy. 00:05:24.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.808 EAL: Restoring previous memory policy: 4 00:05:24.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.808 EAL: request: mp_malloc_sync 00:05:24.808 EAL: No shared files mode enabled, IPC is disabled 00:05:24.808 EAL: Heap on socket 0 was expanded by 18MB 00:05:24.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.808 EAL: request: mp_malloc_sync 00:05:24.808 EAL: No shared files mode enabled, IPC is disabled 00:05:24.808 EAL: Heap on socket 0 was shrunk by 18MB 00:05:24.808 EAL: Trying to obtain current memory policy. 
00:05:24.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.808 EAL: Restoring previous memory policy: 4 00:05:24.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.808 EAL: request: mp_malloc_sync 00:05:24.808 EAL: No shared files mode enabled, IPC is disabled 00:05:24.808 EAL: Heap on socket 0 was expanded by 34MB 00:05:24.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.808 EAL: request: mp_malloc_sync 00:05:24.808 EAL: No shared files mode enabled, IPC is disabled 00:05:24.808 EAL: Heap on socket 0 was shrunk by 34MB 00:05:24.808 EAL: Trying to obtain current memory policy. 00:05:24.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.808 EAL: Restoring previous memory policy: 4 00:05:24.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.808 EAL: request: mp_malloc_sync 00:05:24.808 EAL: No shared files mode enabled, IPC is disabled 00:05:24.808 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.068 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.068 EAL: request: mp_malloc_sync 00:05:25.068 EAL: No shared files mode enabled, IPC is disabled 00:05:25.068 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.068 EAL: Trying to obtain current memory policy. 00:05:25.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.068 EAL: Restoring previous memory policy: 4 00:05:25.068 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.068 EAL: request: mp_malloc_sync 00:05:25.068 EAL: No shared files mode enabled, IPC is disabled 00:05:25.068 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.585 EAL: request: mp_malloc_sync 00:05:25.585 EAL: No shared files mode enabled, IPC is disabled 00:05:25.585 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.585 EAL: Trying to obtain current memory policy. 
00:05:25.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.585 EAL: Restoring previous memory policy: 4 00:05:25.585 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.585 EAL: request: mp_malloc_sync 00:05:25.585 EAL: No shared files mode enabled, IPC is disabled 00:05:25.585 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.152 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.410 EAL: request: mp_malloc_sync 00:05:26.410 EAL: No shared files mode enabled, IPC is disabled 00:05:26.410 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.669 EAL: Trying to obtain current memory policy. 00:05:26.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.669 EAL: Restoring previous memory policy: 4 00:05:26.669 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.669 EAL: request: mp_malloc_sync 00:05:26.669 EAL: No shared files mode enabled, IPC is disabled 00:05:26.669 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.047 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.047 EAL: request: mp_malloc_sync 00:05:28.047 EAL: No shared files mode enabled, IPC is disabled 00:05:28.047 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.616 EAL: Trying to obtain current memory policy. 
00:05:28.616 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.876 EAL: Restoring previous memory policy: 4 00:05:28.876 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.876 EAL: request: mp_malloc_sync 00:05:28.876 EAL: No shared files mode enabled, IPC is disabled 00:05:28.876 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.453 EAL: request: mp_malloc_sync 00:05:31.453 EAL: No shared files mode enabled, IPC is disabled 00:05:31.453 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.834 passed 00:05:32.834 00:05:32.834 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.834 suites 1 1 n/a 0 0 00:05:32.834 tests 2 2 2 0 0 00:05:32.834 asserts 5768 5768 5768 0 n/a 00:05:32.834 00:05:32.834 Elapsed time = 8.551 seconds 00:05:32.834 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.834 EAL: request: mp_malloc_sync 00:05:32.834 EAL: No shared files mode enabled, IPC is disabled 00:05:32.834 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.834 EAL: No shared files mode enabled, IPC is disabled 00:05:32.834 EAL: No shared files mode enabled, IPC is disabled 00:05:32.834 EAL: No shared files mode enabled, IPC is disabled 00:05:32.834 00:05:32.834 real 0m8.885s 00:05:32.834 user 0m7.858s 00:05:32.834 sys 0m0.865s 00:05:32.834 ************************************ 00:05:32.834 END TEST env_vtophys 00:05:32.834 ************************************ 00:05:32.834 22:22:28 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.834 22:22:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.834 22:22:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.834 22:22:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.834 22:22:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.834 22:22:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.834 
************************************ 00:05:32.834 START TEST env_pci 00:05:32.834 ************************************ 00:05:32.834 22:22:28 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.834 00:05:32.834 00:05:32.834 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.834 http://cunit.sourceforge.net/ 00:05:32.834 00:05:32.834 00:05:32.834 Suite: pci 00:05:32.834 Test: pci_hook ...[2024-09-27 22:22:28.696075] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56559 has claimed it 00:05:33.093 passed 00:05:33.093 00:05:33.093 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.093 suites 1 1 n/a 0 0 00:05:33.093 tests 1 1 1 0 0 00:05:33.093 asserts 25 25 25 0 n/a 00:05:33.093 00:05:33.093 Elapsed time = 0.009 seconds 00:05:33.093 EAL: Cannot find device (10000:00:01.0) 00:05:33.093 EAL: Failed to attach device on primary process 00:05:33.093 00:05:33.093 real 0m0.117s 00:05:33.093 user 0m0.042s 00:05:33.093 sys 0m0.074s 00:05:33.093 22:22:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.093 22:22:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:33.093 ************************************ 00:05:33.093 END TEST env_pci 00:05:33.093 ************************************ 00:05:33.093 22:22:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:33.093 22:22:28 env -- env/env.sh@15 -- # uname 00:05:33.093 22:22:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:33.093 22:22:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:33.093 22:22:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.093 22:22:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:33.093 22:22:28 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.093 22:22:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.093 ************************************ 00:05:33.093 START TEST env_dpdk_post_init 00:05:33.093 ************************************ 00:05:33.093 22:22:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.093 EAL: Detected CPU lcores: 10 00:05:33.093 EAL: Detected NUMA nodes: 1 00:05:33.093 EAL: Detected shared linkage of DPDK 00:05:33.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.093 EAL: Selected IOVA mode 'PA' 00:05:33.352 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.352 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:33.352 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:33.352 Starting DPDK initialization... 00:05:33.352 Starting SPDK post initialization... 00:05:33.352 SPDK NVMe probe 00:05:33.352 Attaching to 0000:00:10.0 00:05:33.352 Attaching to 0000:00:11.0 00:05:33.352 Attached to 0000:00:10.0 00:05:33.352 Attached to 0000:00:11.0 00:05:33.352 Cleaning up... 
00:05:33.352 00:05:33.352 real 0m0.290s 00:05:33.352 user 0m0.093s 00:05:33.352 sys 0m0.097s 00:05:33.352 22:22:29 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.352 ************************************ 00:05:33.352 END TEST env_dpdk_post_init 00:05:33.352 ************************************ 00:05:33.352 22:22:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.352 22:22:29 env -- env/env.sh@26 -- # uname 00:05:33.352 22:22:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.352 22:22:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.352 22:22:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.352 22:22:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.352 22:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.352 ************************************ 00:05:33.352 START TEST env_mem_callbacks 00:05:33.352 ************************************ 00:05:33.352 22:22:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.611 EAL: Detected CPU lcores: 10 00:05:33.611 EAL: Detected NUMA nodes: 1 00:05:33.611 EAL: Detected shared linkage of DPDK 00:05:33.611 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.611 EAL: Selected IOVA mode 'PA' 00:05:33.611 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.611 00:05:33.611 00:05:33.611 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.611 http://cunit.sourceforge.net/ 00:05:33.611 00:05:33.611 00:05:33.611 Suite: memory 00:05:33.611 Test: test ... 
00:05:33.611 register 0x200000200000 2097152 00:05:33.611 malloc 3145728 00:05:33.611 register 0x200000400000 4194304 00:05:33.611 buf 0x2000004fffc0 len 3145728 PASSED 00:05:33.611 malloc 64 00:05:33.611 buf 0x2000004ffec0 len 64 PASSED 00:05:33.611 malloc 4194304 00:05:33.611 register 0x200000800000 6291456 00:05:33.611 buf 0x2000009fffc0 len 4194304 PASSED 00:05:33.611 free 0x2000004fffc0 3145728 00:05:33.611 free 0x2000004ffec0 64 00:05:33.611 unregister 0x200000400000 4194304 PASSED 00:05:33.611 free 0x2000009fffc0 4194304 00:05:33.611 unregister 0x200000800000 6291456 PASSED 00:05:33.611 malloc 8388608 00:05:33.611 register 0x200000400000 10485760 00:05:33.611 buf 0x2000005fffc0 len 8388608 PASSED 00:05:33.611 free 0x2000005fffc0 8388608 00:05:33.611 unregister 0x200000400000 10485760 PASSED 00:05:33.870 passed 00:05:33.870 00:05:33.870 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.870 suites 1 1 n/a 0 0 00:05:33.870 tests 1 1 1 0 0 00:05:33.870 asserts 15 15 15 0 n/a 00:05:33.870 00:05:33.870 Elapsed time = 0.083 seconds 00:05:33.870 00:05:33.870 real 0m0.296s 00:05:33.870 user 0m0.111s 00:05:33.870 sys 0m0.082s 00:05:33.870 ************************************ 00:05:33.870 END TEST env_mem_callbacks 00:05:33.870 ************************************ 00:05:33.870 22:22:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.870 22:22:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.870 00:05:33.870 real 0m10.512s 00:05:33.870 user 0m8.625s 00:05:33.870 sys 0m1.513s 00:05:33.870 22:22:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.870 22:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.870 ************************************ 00:05:33.870 END TEST env 00:05:33.870 ************************************ 00:05:33.870 22:22:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.870 22:22:29 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.870 22:22:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.870 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:33.870 ************************************ 00:05:33.870 START TEST rpc 00:05:33.870 ************************************ 00:05:33.870 22:22:29 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.129 * Looking for test storage... 00:05:34.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.129 22:22:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.129 22:22:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.129 22:22:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.129 22:22:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.129 22:22:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.129 22:22:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.129 22:22:29 rpc -- scripts/common.sh@345 -- # : 1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.129 22:22:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.129 22:22:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.129 22:22:29 rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.129 22:22:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.129 22:22:29 rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.129 22:22:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.129 22:22:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.129 22:22:29 rpc -- scripts/common.sh@368 -- # return 0 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.129 --rc genhtml_branch_coverage=1 00:05:34.129 --rc genhtml_function_coverage=1 00:05:34.129 --rc genhtml_legend=1 00:05:34.129 --rc geninfo_all_blocks=1 00:05:34.129 --rc geninfo_unexecuted_blocks=1 00:05:34.129 00:05:34.129 ' 00:05:34.129 22:22:29 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.129 --rc genhtml_branch_coverage=1 00:05:34.129 --rc genhtml_function_coverage=1 00:05:34.129 --rc genhtml_legend=1 00:05:34.129 --rc geninfo_all_blocks=1 00:05:34.130 --rc geninfo_unexecuted_blocks=1 00:05:34.130 00:05:34.130 ' 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:34.130 --rc genhtml_branch_coverage=1 00:05:34.130 --rc genhtml_function_coverage=1 00:05:34.130 --rc genhtml_legend=1 00:05:34.130 --rc geninfo_all_blocks=1 00:05:34.130 --rc geninfo_unexecuted_blocks=1 00:05:34.130 00:05:34.130 ' 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.130 --rc genhtml_branch_coverage=1 00:05:34.130 --rc genhtml_function_coverage=1 00:05:34.130 --rc genhtml_legend=1 00:05:34.130 --rc geninfo_all_blocks=1 00:05:34.130 --rc geninfo_unexecuted_blocks=1 00:05:34.130 00:05:34.130 ' 00:05:34.130 22:22:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56691 00:05:34.130 22:22:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.130 22:22:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:34.130 22:22:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56691 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 56691 ']' 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.130 22:22:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.130 [2024-09-27 22:22:29.984810] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:05:34.130 [2024-09-27 22:22:29.985348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56691 ] 00:05:34.388 [2024-09-27 22:22:30.144087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.694 [2024-09-27 22:22:30.385319] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.694 [2024-09-27 22:22:30.385563] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56691' to capture a snapshot of events at runtime. 00:05:34.694 [2024-09-27 22:22:30.385587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.694 [2024-09-27 22:22:30.385601] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.694 [2024-09-27 22:22:30.385612] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56691 for offline analysis/debug. 
00:05:34.694 [2024-09-27 22:22:30.385667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.112 22:22:31 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.112 22:22:31 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.112 22:22:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.112 22:22:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.112 22:22:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.112 22:22:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.112 22:22:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.112 22:22:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.112 22:22:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.113 ************************************ 00:05:36.113 START TEST rpc_integrity 00:05:36.113 ************************************ 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.113 22:22:31 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.113 { 00:05:36.113 "name": "Malloc0", 00:05:36.113 "aliases": [ 00:05:36.113 "f8ec0df1-d78b-4eb0-9440-03904e8fc4b9" 00:05:36.113 ], 00:05:36.113 "product_name": "Malloc disk", 00:05:36.113 "block_size": 512, 00:05:36.113 "num_blocks": 16384, 00:05:36.113 "uuid": "f8ec0df1-d78b-4eb0-9440-03904e8fc4b9", 00:05:36.113 "assigned_rate_limits": { 00:05:36.113 "rw_ios_per_sec": 0, 00:05:36.113 "rw_mbytes_per_sec": 0, 00:05:36.113 "r_mbytes_per_sec": 0, 00:05:36.113 "w_mbytes_per_sec": 0 00:05:36.113 }, 00:05:36.113 "claimed": false, 00:05:36.113 "zoned": false, 00:05:36.113 "supported_io_types": { 00:05:36.113 "read": true, 00:05:36.113 "write": true, 00:05:36.113 "unmap": true, 00:05:36.113 "flush": true, 00:05:36.113 "reset": true, 00:05:36.113 "nvme_admin": false, 00:05:36.113 "nvme_io": false, 00:05:36.113 "nvme_io_md": false, 00:05:36.113 "write_zeroes": true, 00:05:36.113 "zcopy": true, 00:05:36.113 "get_zone_info": false, 00:05:36.113 "zone_management": false, 00:05:36.113 "zone_append": false, 00:05:36.113 "compare": false, 00:05:36.113 "compare_and_write": false, 00:05:36.113 "abort": true, 00:05:36.113 "seek_hole": false, 
00:05:36.113 "seek_data": false, 00:05:36.113 "copy": true, 00:05:36.113 "nvme_iov_md": false 00:05:36.113 }, 00:05:36.113 "memory_domains": [ 00:05:36.113 { 00:05:36.113 "dma_device_id": "system", 00:05:36.113 "dma_device_type": 1 00:05:36.113 }, 00:05:36.113 { 00:05:36.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.113 "dma_device_type": 2 00:05:36.113 } 00:05:36.113 ], 00:05:36.113 "driver_specific": {} 00:05:36.113 } 00:05:36.113 ]' 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.113 [2024-09-27 22:22:31.802525] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:36.113 [2024-09-27 22:22:31.802607] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.113 [2024-09-27 22:22:31.802640] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:36.113 [2024-09-27 22:22:31.802657] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.113 [2024-09-27 22:22:31.805385] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.113 [2024-09-27 22:22:31.805571] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.113 Passthru0 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:36.113 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.113 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.113 { 00:05:36.113 "name": "Malloc0", 00:05:36.113 "aliases": [ 00:05:36.113 "f8ec0df1-d78b-4eb0-9440-03904e8fc4b9" 00:05:36.113 ], 00:05:36.113 "product_name": "Malloc disk", 00:05:36.113 "block_size": 512, 00:05:36.113 "num_blocks": 16384, 00:05:36.113 "uuid": "f8ec0df1-d78b-4eb0-9440-03904e8fc4b9", 00:05:36.113 "assigned_rate_limits": { 00:05:36.113 "rw_ios_per_sec": 0, 00:05:36.113 "rw_mbytes_per_sec": 0, 00:05:36.113 "r_mbytes_per_sec": 0, 00:05:36.113 "w_mbytes_per_sec": 0 00:05:36.113 }, 00:05:36.113 "claimed": true, 00:05:36.113 "claim_type": "exclusive_write", 00:05:36.113 "zoned": false, 00:05:36.113 "supported_io_types": { 00:05:36.113 "read": true, 00:05:36.113 "write": true, 00:05:36.113 "unmap": true, 00:05:36.113 "flush": true, 00:05:36.113 "reset": true, 00:05:36.113 "nvme_admin": false, 00:05:36.113 "nvme_io": false, 00:05:36.113 "nvme_io_md": false, 00:05:36.113 "write_zeroes": true, 00:05:36.113 "zcopy": true, 00:05:36.113 "get_zone_info": false, 00:05:36.113 "zone_management": false, 00:05:36.113 "zone_append": false, 00:05:36.113 "compare": false, 00:05:36.113 "compare_and_write": false, 00:05:36.113 "abort": true, 00:05:36.113 "seek_hole": false, 00:05:36.113 "seek_data": false, 00:05:36.113 "copy": true, 00:05:36.113 "nvme_iov_md": false 00:05:36.113 }, 00:05:36.113 "memory_domains": [ 00:05:36.113 { 00:05:36.113 "dma_device_id": "system", 00:05:36.113 "dma_device_type": 1 00:05:36.113 }, 00:05:36.113 { 00:05:36.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.113 "dma_device_type": 2 00:05:36.113 } 00:05:36.113 ], 00:05:36.113 "driver_specific": {} 00:05:36.113 }, 00:05:36.113 { 00:05:36.113 "name": "Passthru0", 00:05:36.113 "aliases": [ 00:05:36.113 "58b2138c-b70c-5843-90b7-cc5ff58b63fa" 00:05:36.113 ], 00:05:36.113 "product_name": "passthru", 00:05:36.113 
"block_size": 512, 00:05:36.113 "num_blocks": 16384, 00:05:36.113 "uuid": "58b2138c-b70c-5843-90b7-cc5ff58b63fa", 00:05:36.113 "assigned_rate_limits": { 00:05:36.113 "rw_ios_per_sec": 0, 00:05:36.113 "rw_mbytes_per_sec": 0, 00:05:36.113 "r_mbytes_per_sec": 0, 00:05:36.113 "w_mbytes_per_sec": 0 00:05:36.113 }, 00:05:36.113 "claimed": false, 00:05:36.113 "zoned": false, 00:05:36.113 "supported_io_types": { 00:05:36.113 "read": true, 00:05:36.113 "write": true, 00:05:36.113 "unmap": true, 00:05:36.113 "flush": true, 00:05:36.113 "reset": true, 00:05:36.113 "nvme_admin": false, 00:05:36.113 "nvme_io": false, 00:05:36.113 "nvme_io_md": false, 00:05:36.113 "write_zeroes": true, 00:05:36.113 "zcopy": true, 00:05:36.114 "get_zone_info": false, 00:05:36.114 "zone_management": false, 00:05:36.114 "zone_append": false, 00:05:36.114 "compare": false, 00:05:36.114 "compare_and_write": false, 00:05:36.114 "abort": true, 00:05:36.114 "seek_hole": false, 00:05:36.114 "seek_data": false, 00:05:36.114 "copy": true, 00:05:36.114 "nvme_iov_md": false 00:05:36.114 }, 00:05:36.114 "memory_domains": [ 00:05:36.114 { 00:05:36.114 "dma_device_id": "system", 00:05:36.114 "dma_device_type": 1 00:05:36.114 }, 00:05:36.114 { 00:05:36.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.114 "dma_device_type": 2 00:05:36.114 } 00:05:36.114 ], 00:05:36.114 "driver_specific": { 00:05:36.114 "passthru": { 00:05:36.114 "name": "Passthru0", 00:05:36.114 "base_bdev_name": "Malloc0" 00:05:36.114 } 00:05:36.114 } 00:05:36.114 } 00:05:36.114 ]' 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.114 22:22:31 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.114 ************************************ 00:05:36.114 END TEST rpc_integrity 00:05:36.114 ************************************ 00:05:36.114 22:22:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.114 00:05:36.114 real 0m0.346s 00:05:36.114 user 0m0.187s 00:05:36.114 sys 0m0.057s 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.114 22:22:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.373 22:22:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.373 22:22:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.373 22:22:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.373 22:22:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.373 ************************************ 00:05:36.373 START TEST rpc_plugins 00:05:36.373 ************************************ 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:36.373 { 00:05:36.373 "name": "Malloc1", 00:05:36.373 "aliases": [ 00:05:36.373 "52931259-c9ed-4767-b7a2-51db1898c54e" 00:05:36.373 ], 00:05:36.373 "product_name": "Malloc disk", 00:05:36.373 "block_size": 4096, 00:05:36.373 "num_blocks": 256, 00:05:36.373 "uuid": "52931259-c9ed-4767-b7a2-51db1898c54e", 00:05:36.373 "assigned_rate_limits": { 00:05:36.373 "rw_ios_per_sec": 0, 00:05:36.373 "rw_mbytes_per_sec": 0, 00:05:36.373 "r_mbytes_per_sec": 0, 00:05:36.373 "w_mbytes_per_sec": 0 00:05:36.373 }, 00:05:36.373 "claimed": false, 00:05:36.373 "zoned": false, 00:05:36.373 "supported_io_types": { 00:05:36.373 "read": true, 00:05:36.373 "write": true, 00:05:36.373 "unmap": true, 00:05:36.373 "flush": true, 00:05:36.373 "reset": true, 00:05:36.373 "nvme_admin": false, 00:05:36.373 "nvme_io": false, 00:05:36.373 "nvme_io_md": false, 00:05:36.373 "write_zeroes": true, 00:05:36.373 "zcopy": true, 00:05:36.373 "get_zone_info": false, 00:05:36.373 "zone_management": false, 00:05:36.373 "zone_append": false, 00:05:36.373 "compare": false, 00:05:36.373 "compare_and_write": false, 00:05:36.373 "abort": true, 00:05:36.373 "seek_hole": false, 00:05:36.373 "seek_data": false, 00:05:36.373 "copy": 
true, 00:05:36.373 "nvme_iov_md": false 00:05:36.373 }, 00:05:36.373 "memory_domains": [ 00:05:36.373 { 00:05:36.373 "dma_device_id": "system", 00:05:36.373 "dma_device_type": 1 00:05:36.373 }, 00:05:36.373 { 00:05:36.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.373 "dma_device_type": 2 00:05:36.373 } 00:05:36.373 ], 00:05:36.373 "driver_specific": {} 00:05:36.373 } 00:05:36.373 ]' 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:36.373 ************************************ 00:05:36.373 END TEST rpc_plugins 00:05:36.373 ************************************ 00:05:36.373 22:22:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:36.373 00:05:36.373 real 0m0.168s 00:05:36.373 user 0m0.091s 00:05:36.373 sys 0m0.032s 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.373 22:22:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:36.632 22:22:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:36.632 22:22:32 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.632 22:22:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.632 22:22:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.632 ************************************ 00:05:36.632 START TEST rpc_trace_cmd_test 00:05:36.632 ************************************ 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:36.632 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56691", 00:05:36.632 "tpoint_group_mask": "0x8", 00:05:36.632 "iscsi_conn": { 00:05:36.632 "mask": "0x2", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "scsi": { 00:05:36.632 "mask": "0x4", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "bdev": { 00:05:36.632 "mask": "0x8", 00:05:36.632 "tpoint_mask": "0xffffffffffffffff" 00:05:36.632 }, 00:05:36.632 "nvmf_rdma": { 00:05:36.632 "mask": "0x10", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "nvmf_tcp": { 00:05:36.632 "mask": "0x20", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "ftl": { 00:05:36.632 "mask": "0x40", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "blobfs": { 00:05:36.632 "mask": "0x80", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "dsa": { 00:05:36.632 "mask": "0x200", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "thread": { 00:05:36.632 "mask": "0x400", 00:05:36.632 
"tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "nvme_pcie": { 00:05:36.632 "mask": "0x800", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "iaa": { 00:05:36.632 "mask": "0x1000", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "nvme_tcp": { 00:05:36.632 "mask": "0x2000", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "bdev_nvme": { 00:05:36.632 "mask": "0x4000", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "sock": { 00:05:36.632 "mask": "0x8000", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "blob": { 00:05:36.632 "mask": "0x10000", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 }, 00:05:36.632 "bdev_raid": { 00:05:36.632 "mask": "0x20000", 00:05:36.632 "tpoint_mask": "0x0" 00:05:36.632 } 00:05:36.632 }' 00:05:36.632 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:36.633 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:36.892 ************************************ 00:05:36.892 END TEST rpc_trace_cmd_test 00:05:36.892 ************************************ 00:05:36.892 22:22:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:36.892 00:05:36.892 real 0m0.211s 00:05:36.892 user 0m0.158s 00:05:36.892 sys 0m0.039s 00:05:36.892 22:22:32 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.892 22:22:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.892 22:22:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:36.892 22:22:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:36.892 22:22:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:36.892 22:22:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.892 22:22:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.892 22:22:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.892 ************************************ 00:05:36.892 START TEST rpc_daemon_integrity 00:05:36.892 ************************************ 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.892 { 00:05:36.892 "name": "Malloc2", 00:05:36.892 "aliases": [ 00:05:36.892 "8ec5b356-ece5-4ce6-86f8-43b437f10d87" 00:05:36.892 ], 00:05:36.892 "product_name": "Malloc disk", 00:05:36.892 "block_size": 512, 00:05:36.892 "num_blocks": 16384, 00:05:36.892 "uuid": "8ec5b356-ece5-4ce6-86f8-43b437f10d87", 00:05:36.892 "assigned_rate_limits": { 00:05:36.892 "rw_ios_per_sec": 0, 00:05:36.892 "rw_mbytes_per_sec": 0, 00:05:36.892 "r_mbytes_per_sec": 0, 00:05:36.892 "w_mbytes_per_sec": 0 00:05:36.892 }, 00:05:36.892 "claimed": false, 00:05:36.892 "zoned": false, 00:05:36.892 "supported_io_types": { 00:05:36.892 "read": true, 00:05:36.892 "write": true, 00:05:36.892 "unmap": true, 00:05:36.892 "flush": true, 00:05:36.892 "reset": true, 00:05:36.892 "nvme_admin": false, 00:05:36.892 "nvme_io": false, 00:05:36.892 "nvme_io_md": false, 00:05:36.892 "write_zeroes": true, 00:05:36.892 "zcopy": true, 00:05:36.892 "get_zone_info": false, 00:05:36.892 "zone_management": false, 00:05:36.892 "zone_append": false, 00:05:36.892 "compare": false, 00:05:36.892 "compare_and_write": false, 00:05:36.892 "abort": true, 00:05:36.892 "seek_hole": false, 00:05:36.892 "seek_data": false, 00:05:36.892 "copy": true, 00:05:36.892 "nvme_iov_md": false 00:05:36.892 }, 00:05:36.892 "memory_domains": [ 00:05:36.892 { 00:05:36.892 "dma_device_id": "system", 00:05:36.892 "dma_device_type": 1 00:05:36.892 }, 00:05:36.892 { 00:05:36.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.892 "dma_device_type": 2 00:05:36.892 } 00:05:36.892 ], 00:05:36.892 "driver_specific": {} 00:05:36.892 } 00:05:36.892 ]' 
00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.892 [2024-09-27 22:22:32.734780] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:36.892 [2024-09-27 22:22:32.734853] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.892 [2024-09-27 22:22:32.734878] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:36.892 [2024-09-27 22:22:32.734893] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.892 [2024-09-27 22:22:32.737555] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.892 [2024-09-27 22:22:32.737718] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.892 Passthru0 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.892 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.152 { 00:05:37.152 "name": "Malloc2", 00:05:37.152 "aliases": [ 00:05:37.152 "8ec5b356-ece5-4ce6-86f8-43b437f10d87" 00:05:37.152 ], 00:05:37.152 "product_name": "Malloc disk", 00:05:37.152 "block_size": 
512, 00:05:37.152 "num_blocks": 16384, 00:05:37.152 "uuid": "8ec5b356-ece5-4ce6-86f8-43b437f10d87", 00:05:37.152 "assigned_rate_limits": { 00:05:37.152 "rw_ios_per_sec": 0, 00:05:37.152 "rw_mbytes_per_sec": 0, 00:05:37.152 "r_mbytes_per_sec": 0, 00:05:37.152 "w_mbytes_per_sec": 0 00:05:37.152 }, 00:05:37.152 "claimed": true, 00:05:37.152 "claim_type": "exclusive_write", 00:05:37.152 "zoned": false, 00:05:37.152 "supported_io_types": { 00:05:37.152 "read": true, 00:05:37.152 "write": true, 00:05:37.152 "unmap": true, 00:05:37.152 "flush": true, 00:05:37.152 "reset": true, 00:05:37.152 "nvme_admin": false, 00:05:37.152 "nvme_io": false, 00:05:37.152 "nvme_io_md": false, 00:05:37.152 "write_zeroes": true, 00:05:37.152 "zcopy": true, 00:05:37.152 "get_zone_info": false, 00:05:37.152 "zone_management": false, 00:05:37.152 "zone_append": false, 00:05:37.152 "compare": false, 00:05:37.152 "compare_and_write": false, 00:05:37.152 "abort": true, 00:05:37.152 "seek_hole": false, 00:05:37.152 "seek_data": false, 00:05:37.152 "copy": true, 00:05:37.152 "nvme_iov_md": false 00:05:37.152 }, 00:05:37.152 "memory_domains": [ 00:05:37.152 { 00:05:37.152 "dma_device_id": "system", 00:05:37.152 "dma_device_type": 1 00:05:37.152 }, 00:05:37.152 { 00:05:37.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.152 "dma_device_type": 2 00:05:37.152 } 00:05:37.152 ], 00:05:37.152 "driver_specific": {} 00:05:37.152 }, 00:05:37.152 { 00:05:37.152 "name": "Passthru0", 00:05:37.152 "aliases": [ 00:05:37.152 "8e4d4f59-66a0-5583-b783-301e4b4b1c24" 00:05:37.152 ], 00:05:37.152 "product_name": "passthru", 00:05:37.152 "block_size": 512, 00:05:37.152 "num_blocks": 16384, 00:05:37.152 "uuid": "8e4d4f59-66a0-5583-b783-301e4b4b1c24", 00:05:37.152 "assigned_rate_limits": { 00:05:37.152 "rw_ios_per_sec": 0, 00:05:37.152 "rw_mbytes_per_sec": 0, 00:05:37.152 "r_mbytes_per_sec": 0, 00:05:37.152 "w_mbytes_per_sec": 0 00:05:37.152 }, 00:05:37.152 "claimed": false, 00:05:37.152 "zoned": false, 00:05:37.152 
"supported_io_types": { 00:05:37.152 "read": true, 00:05:37.152 "write": true, 00:05:37.152 "unmap": true, 00:05:37.152 "flush": true, 00:05:37.152 "reset": true, 00:05:37.152 "nvme_admin": false, 00:05:37.152 "nvme_io": false, 00:05:37.152 "nvme_io_md": false, 00:05:37.152 "write_zeroes": true, 00:05:37.152 "zcopy": true, 00:05:37.152 "get_zone_info": false, 00:05:37.152 "zone_management": false, 00:05:37.152 "zone_append": false, 00:05:37.152 "compare": false, 00:05:37.152 "compare_and_write": false, 00:05:37.152 "abort": true, 00:05:37.152 "seek_hole": false, 00:05:37.152 "seek_data": false, 00:05:37.152 "copy": true, 00:05:37.152 "nvme_iov_md": false 00:05:37.152 }, 00:05:37.152 "memory_domains": [ 00:05:37.152 { 00:05:37.152 "dma_device_id": "system", 00:05:37.152 "dma_device_type": 1 00:05:37.152 }, 00:05:37.152 { 00:05:37.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.152 "dma_device_type": 2 00:05:37.152 } 00:05:37.152 ], 00:05:37.152 "driver_specific": { 00:05:37.152 "passthru": { 00:05:37.152 "name": "Passthru0", 00:05:37.152 "base_bdev_name": "Malloc2" 00:05:37.152 } 00:05:37.152 } 00:05:37.152 } 00:05:37.152 ]' 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.152 ************************************ 00:05:37.152 END TEST rpc_daemon_integrity 00:05:37.152 ************************************ 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.152 00:05:37.152 real 0m0.348s 00:05:37.152 user 0m0.183s 00:05:37.152 sys 0m0.061s 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.152 22:22:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.152 22:22:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:37.152 22:22:32 rpc -- rpc/rpc.sh@84 -- # killprocess 56691 00:05:37.152 22:22:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 56691 ']' 00:05:37.152 22:22:32 rpc -- common/autotest_common.sh@954 -- # kill -0 56691 00:05:37.152 22:22:32 rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.152 22:22:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.152 22:22:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56691 00:05:37.152 killing process with pid 56691 00:05:37.152 22:22:33 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.152 22:22:33 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.152 22:22:33 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56691' 00:05:37.152 22:22:33 rpc -- common/autotest_common.sh@969 -- # kill 56691 00:05:37.152 22:22:33 rpc -- common/autotest_common.sh@974 -- # wait 56691 00:05:40.469 00:05:40.469 real 0m6.593s 00:05:40.469 user 0m6.935s 00:05:40.469 sys 0m1.065s 00:05:40.469 ************************************ 00:05:40.469 END TEST rpc 00:05:40.469 ************************************ 00:05:40.469 22:22:36 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.469 22:22:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.469 22:22:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.469 22:22:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.469 22:22:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.469 22:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:40.469 ************************************ 00:05:40.469 START TEST skip_rpc 00:05:40.469 ************************************ 00:05:40.469 22:22:36 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:40.728 * Looking for test storage... 
00:05:40.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.728 22:22:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.728 --rc genhtml_branch_coverage=1 00:05:40.728 --rc genhtml_function_coverage=1 00:05:40.728 --rc genhtml_legend=1 00:05:40.728 --rc geninfo_all_blocks=1 00:05:40.728 --rc geninfo_unexecuted_blocks=1 00:05:40.728 00:05:40.728 ' 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.728 --rc genhtml_branch_coverage=1 00:05:40.728 --rc genhtml_function_coverage=1 00:05:40.728 --rc genhtml_legend=1 00:05:40.728 --rc geninfo_all_blocks=1 00:05:40.728 --rc geninfo_unexecuted_blocks=1 00:05:40.728 00:05:40.728 ' 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.728 --rc genhtml_branch_coverage=1 00:05:40.728 --rc genhtml_function_coverage=1 00:05:40.728 --rc genhtml_legend=1 00:05:40.728 --rc geninfo_all_blocks=1 00:05:40.728 --rc geninfo_unexecuted_blocks=1 00:05:40.728 00:05:40.728 ' 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.728 --rc genhtml_branch_coverage=1 00:05:40.728 --rc genhtml_function_coverage=1 00:05:40.728 --rc genhtml_legend=1 00:05:40.728 --rc geninfo_all_blocks=1 00:05:40.728 --rc geninfo_unexecuted_blocks=1 00:05:40.728 00:05:40.728 ' 00:05:40.728 22:22:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.728 22:22:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:40.728 22:22:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.728 22:22:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.728 ************************************ 00:05:40.728 START TEST skip_rpc 00:05:40.728 ************************************ 00:05:40.728 22:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:40.728 22:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56938 00:05:40.728 22:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.728 22:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.728 22:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.987 [2024-09-27 22:22:36.658898] Starting SPDK v25.01-pre 
git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:05:40.987 [2024-09-27 22:22:36.659277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56938 ] 00:05:40.987 [2024-09-27 22:22:36.831444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.254 [2024-09-27 22:22:37.067456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56938 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56938 ']' 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56938 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56938 00:05:46.543 killing process with pid 56938 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56938' 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56938 00:05:46.543 22:22:41 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56938 00:05:49.077 00:05:49.077 real 0m8.400s 00:05:49.077 user 0m7.818s 00:05:49.077 sys 0m0.489s 00:05:49.077 22:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.077 22:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.077 ************************************ 00:05:49.077 END TEST skip_rpc 00:05:49.077 ************************************ 00:05:49.336 22:22:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.336 22:22:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.336 22:22:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.336 22:22:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.336 
************************************ 00:05:49.336 START TEST skip_rpc_with_json 00:05:49.336 ************************************ 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57053 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57053 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57053 ']' 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.336 22:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.336 [2024-09-27 22:22:45.139285] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:05:49.336 [2024-09-27 22:22:45.139634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57053 ] 00:05:49.595 [2024-09-27 22:22:45.310717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.855 [2024-09-27 22:22:45.542695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.244 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.244 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.245 [2024-09-27 22:22:46.842453] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:51.245 request: 00:05:51.245 { 00:05:51.245 "trtype": "tcp", 00:05:51.245 "method": "nvmf_get_transports", 00:05:51.245 "req_id": 1 00:05:51.245 } 00:05:51.245 Got JSON-RPC error response 00:05:51.245 response: 00:05:51.245 { 00:05:51.245 "code": -19, 00:05:51.245 "message": "No such device" 00:05:51.245 } 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.245 [2024-09-27 22:22:46.858653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.245 22:22:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.245 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.245 22:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.245 { 00:05:51.245 "subsystems": [ 00:05:51.245 { 00:05:51.245 "subsystem": "fsdev", 00:05:51.245 "config": [ 00:05:51.245 { 00:05:51.245 "method": "fsdev_set_opts", 00:05:51.245 "params": { 00:05:51.245 "fsdev_io_pool_size": 65535, 00:05:51.245 "fsdev_io_cache_size": 256 00:05:51.245 } 00:05:51.245 } 00:05:51.245 ] 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "subsystem": "keyring", 00:05:51.245 "config": [] 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "subsystem": "iobuf", 00:05:51.245 "config": [ 00:05:51.245 { 00:05:51.245 "method": "iobuf_set_options", 00:05:51.245 "params": { 00:05:51.245 "small_pool_count": 8192, 00:05:51.245 "large_pool_count": 1024, 00:05:51.245 "small_bufsize": 8192, 00:05:51.245 "large_bufsize": 135168 00:05:51.245 } 00:05:51.245 } 00:05:51.245 ] 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "subsystem": "sock", 00:05:51.245 "config": [ 00:05:51.245 { 00:05:51.245 "method": "sock_set_default_impl", 00:05:51.245 "params": { 00:05:51.245 "impl_name": "posix" 00:05:51.245 } 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "method": "sock_impl_set_options", 00:05:51.245 "params": { 00:05:51.245 "impl_name": "ssl", 00:05:51.245 "recv_buf_size": 4096, 00:05:51.245 "send_buf_size": 4096, 00:05:51.245 "enable_recv_pipe": true, 00:05:51.245 "enable_quickack": false, 00:05:51.245 "enable_placement_id": 0, 00:05:51.245 
"enable_zerocopy_send_server": true, 00:05:51.245 "enable_zerocopy_send_client": false, 00:05:51.245 "zerocopy_threshold": 0, 00:05:51.245 "tls_version": 0, 00:05:51.245 "enable_ktls": false 00:05:51.245 } 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "method": "sock_impl_set_options", 00:05:51.245 "params": { 00:05:51.245 "impl_name": "posix", 00:05:51.245 "recv_buf_size": 2097152, 00:05:51.245 "send_buf_size": 2097152, 00:05:51.245 "enable_recv_pipe": true, 00:05:51.245 "enable_quickack": false, 00:05:51.245 "enable_placement_id": 0, 00:05:51.245 "enable_zerocopy_send_server": true, 00:05:51.245 "enable_zerocopy_send_client": false, 00:05:51.245 "zerocopy_threshold": 0, 00:05:51.245 "tls_version": 0, 00:05:51.245 "enable_ktls": false 00:05:51.245 } 00:05:51.245 } 00:05:51.245 ] 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "subsystem": "vmd", 00:05:51.245 "config": [] 00:05:51.245 }, 00:05:51.245 { 00:05:51.245 "subsystem": "accel", 00:05:51.246 "config": [ 00:05:51.246 { 00:05:51.246 "method": "accel_set_options", 00:05:51.246 "params": { 00:05:51.246 "small_cache_size": 128, 00:05:51.246 "large_cache_size": 16, 00:05:51.246 "task_count": 2048, 00:05:51.246 "sequence_count": 2048, 00:05:51.246 "buf_count": 2048 00:05:51.246 } 00:05:51.246 } 00:05:51.246 ] 00:05:51.246 }, 00:05:51.246 { 00:05:51.246 "subsystem": "bdev", 00:05:51.246 "config": [ 00:05:51.246 { 00:05:51.246 "method": "bdev_set_options", 00:05:51.246 "params": { 00:05:51.246 "bdev_io_pool_size": 65535, 00:05:51.246 "bdev_io_cache_size": 256, 00:05:51.246 "bdev_auto_examine": true, 00:05:51.246 "iobuf_small_cache_size": 128, 00:05:51.246 "iobuf_large_cache_size": 16, 00:05:51.246 "bdev_io_stack_size": 4096 00:05:51.246 } 00:05:51.246 }, 00:05:51.246 { 00:05:51.246 "method": "bdev_raid_set_options", 00:05:51.246 "params": { 00:05:51.246 "process_window_size_kb": 1024, 00:05:51.246 "process_max_bandwidth_mb_sec": 0 00:05:51.246 } 00:05:51.246 }, 00:05:51.246 { 00:05:51.246 "method": 
"bdev_iscsi_set_options", 00:05:51.246 "params": { 00:05:51.246 "timeout_sec": 30 00:05:51.246 } 00:05:51.246 }, 00:05:51.246 { 00:05:51.246 "method": "bdev_nvme_set_options", 00:05:51.246 "params": { 00:05:51.246 "action_on_timeout": "none", 00:05:51.246 "timeout_us": 0, 00:05:51.246 "timeout_admin_us": 0, 00:05:51.246 "keep_alive_timeout_ms": 10000, 00:05:51.246 "arbitration_burst": 0, 00:05:51.246 "low_priority_weight": 0, 00:05:51.246 "medium_priority_weight": 0, 00:05:51.246 "high_priority_weight": 0, 00:05:51.246 "nvme_adminq_poll_period_us": 10000, 00:05:51.246 "nvme_ioq_poll_period_us": 0, 00:05:51.246 "io_queue_requests": 0, 00:05:51.246 "delay_cmd_submit": true, 00:05:51.246 "transport_retry_count": 4, 00:05:51.246 "bdev_retry_count": 3, 00:05:51.246 "transport_ack_timeout": 0, 00:05:51.246 "ctrlr_loss_timeout_sec": 0, 00:05:51.246 "reconnect_delay_sec": 0, 00:05:51.246 "fast_io_fail_timeout_sec": 0, 00:05:51.246 "disable_auto_failback": false, 00:05:51.246 "generate_uuids": false, 00:05:51.246 "transport_tos": 0, 00:05:51.246 "nvme_error_stat": false, 00:05:51.246 "rdma_srq_size": 0, 00:05:51.246 "io_path_stat": false, 00:05:51.246 "allow_accel_sequence": false, 00:05:51.246 "rdma_max_cq_size": 0, 00:05:51.246 "rdma_cm_event_timeout_ms": 0, 00:05:51.246 "dhchap_digests": [ 00:05:51.246 "sha256", 00:05:51.246 "sha384", 00:05:51.246 "sha512" 00:05:51.246 ], 00:05:51.246 "dhchap_dhgroups": [ 00:05:51.246 "null", 00:05:51.246 "ffdhe2048", 00:05:51.246 "ffdhe3072", 00:05:51.246 "ffdhe4096", 00:05:51.246 "ffdhe6144", 00:05:51.246 "ffdhe8192" 00:05:51.246 ] 00:05:51.246 } 00:05:51.246 }, 00:05:51.246 { 00:05:51.246 "method": "bdev_nvme_set_hotplug", 00:05:51.246 "params": { 00:05:51.246 "period_us": 100000, 00:05:51.246 "enable": false 00:05:51.246 } 00:05:51.246 }, 00:05:51.246 { 00:05:51.246 "method": "bdev_wait_for_examine" 00:05:51.247 } 00:05:51.247 ] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "scsi", 00:05:51.247 "config": null 00:05:51.247 
}, 00:05:51.247 { 00:05:51.247 "subsystem": "scheduler", 00:05:51.247 "config": [ 00:05:51.247 { 00:05:51.247 "method": "framework_set_scheduler", 00:05:51.247 "params": { 00:05:51.247 "name": "static" 00:05:51.247 } 00:05:51.247 } 00:05:51.247 ] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "vhost_scsi", 00:05:51.247 "config": [] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "vhost_blk", 00:05:51.247 "config": [] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "ublk", 00:05:51.247 "config": [] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "nbd", 00:05:51.247 "config": [] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "nvmf", 00:05:51.247 "config": [ 00:05:51.247 { 00:05:51.247 "method": "nvmf_set_config", 00:05:51.247 "params": { 00:05:51.247 "discovery_filter": "match_any", 00:05:51.247 "admin_cmd_passthru": { 00:05:51.247 "identify_ctrlr": false 00:05:51.247 }, 00:05:51.247 "dhchap_digests": [ 00:05:51.247 "sha256", 00:05:51.247 "sha384", 00:05:51.247 "sha512" 00:05:51.247 ], 00:05:51.247 "dhchap_dhgroups": [ 00:05:51.247 "null", 00:05:51.247 "ffdhe2048", 00:05:51.247 "ffdhe3072", 00:05:51.247 "ffdhe4096", 00:05:51.247 "ffdhe6144", 00:05:51.247 "ffdhe8192" 00:05:51.247 ] 00:05:51.247 } 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "method": "nvmf_set_max_subsystems", 00:05:51.247 "params": { 00:05:51.247 "max_subsystems": 1024 00:05:51.247 } 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "method": "nvmf_set_crdt", 00:05:51.247 "params": { 00:05:51.247 "crdt1": 0, 00:05:51.247 "crdt2": 0, 00:05:51.247 "crdt3": 0 00:05:51.247 } 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "method": "nvmf_create_transport", 00:05:51.247 "params": { 00:05:51.247 "trtype": "TCP", 00:05:51.247 "max_queue_depth": 128, 00:05:51.247 "max_io_qpairs_per_ctrlr": 127, 00:05:51.247 "in_capsule_data_size": 4096, 00:05:51.247 "max_io_size": 131072, 00:05:51.247 "io_unit_size": 131072, 00:05:51.247 "max_aq_depth": 128, 00:05:51.247 
"num_shared_buffers": 511, 00:05:51.247 "buf_cache_size": 4294967295, 00:05:51.247 "dif_insert_or_strip": false, 00:05:51.247 "zcopy": false, 00:05:51.247 "c2h_success": true, 00:05:51.247 "sock_priority": 0, 00:05:51.247 "abort_timeout_sec": 1, 00:05:51.247 "ack_timeout": 0, 00:05:51.247 "data_wr_pool_size": 0 00:05:51.247 } 00:05:51.247 } 00:05:51.247 ] 00:05:51.247 }, 00:05:51.247 { 00:05:51.247 "subsystem": "iscsi", 00:05:51.247 "config": [ 00:05:51.247 { 00:05:51.247 "method": "iscsi_set_options", 00:05:51.247 "params": { 00:05:51.247 "node_base": "iqn.2016-06.io.spdk", 00:05:51.247 "max_sessions": 128, 00:05:51.247 "max_connections_per_session": 2, 00:05:51.247 "max_queue_depth": 64, 00:05:51.247 "default_time2wait": 2, 00:05:51.247 "default_time2retain": 20, 00:05:51.247 "first_burst_length": 8192, 00:05:51.247 "immediate_data": true, 00:05:51.247 "allow_duplicated_isid": false, 00:05:51.247 "error_recovery_level": 0, 00:05:51.247 "nop_timeout": 60, 00:05:51.247 "nop_in_interval": 30, 00:05:51.247 "disable_chap": false, 00:05:51.247 "require_chap": false, 00:05:51.247 "mutual_chap": false, 00:05:51.247 "chap_group": 0, 00:05:51.247 "max_large_datain_per_connection": 64, 00:05:51.247 "max_r2t_per_connection": 4, 00:05:51.247 "pdu_pool_size": 36864, 00:05:51.247 "immediate_data_pool_size": 16384, 00:05:51.247 "data_out_pool_size": 2048 00:05:51.247 } 00:05:51.247 } 00:05:51.247 ] 00:05:51.247 } 00:05:51.247 ] 00:05:51.247 } 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57053 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57053 ']' 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57053 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:51.247 22:22:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57053 00:05:51.247 killing process with pid 57053 00:05:51.247 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.248 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.248 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57053' 00:05:51.248 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57053 00:05:51.248 22:22:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57053 00:05:54.574 22:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57120 00:05:54.574 22:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.574 22:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57120 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57120 ']' 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57120 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57120 00:05:59.843 killing process with pid 57120 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.843 22:22:55 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57120' 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57120 00:05:59.843 22:22:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57120 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.157 00:06:03.157 real 0m13.621s 00:06:03.157 user 0m12.892s 00:06:03.157 sys 0m1.042s 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.157 ************************************ 00:06:03.157 END TEST skip_rpc_with_json 00:06:03.157 ************************************ 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.157 22:22:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.157 22:22:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.157 22:22:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.157 22:22:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.157 ************************************ 00:06:03.157 START TEST skip_rpc_with_delay 00:06:03.157 ************************************ 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- 
# local es=0 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.157 [2024-09-27 22:22:58.821773] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:03.157 [2024-09-27 22:22:58.821916] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.157 00:06:03.157 real 0m0.181s 00:06:03.157 user 0m0.092s 00:06:03.157 sys 0m0.088s 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.157 ************************************ 00:06:03.157 END TEST skip_rpc_with_delay 00:06:03.157 ************************************ 00:06:03.157 22:22:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.157 22:22:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.157 22:22:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.157 22:22:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.157 22:22:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.157 22:22:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.157 22:22:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.157 ************************************ 00:06:03.157 START TEST exit_on_failed_rpc_init 00:06:03.157 ************************************ 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57259 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57259 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57259 ']' 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.157 22:22:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.415 [2024-09-27 22:22:59.065789] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:06:03.415 [2024-09-27 22:22:59.066164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57259 ] 00:06:03.415 [2024-09-27 22:22:59.235363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.674 [2024-09-27 22:22:59.471414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.048 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.048 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:05.048 22:23:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.049 22:23:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:05.049 22:23:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.049 [2024-09-27 22:23:00.852615] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:06:05.049 [2024-09-27 22:23:00.852967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57288 ] 00:06:05.307 [2024-09-27 22:23:01.026339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.566 [2024-09-27 22:23:01.258711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.566 [2024-09-27 22:23:01.258981] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:05.566 [2024-09-27 22:23:01.259006] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.566 [2024-09-27 22:23:01.259021] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57259 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57259 ']' 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57259 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.833 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57259 00:06:06.092 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.093 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.093 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57259' 
00:06:06.093 killing process with pid 57259 00:06:06.093 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57259 00:06:06.093 22:23:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57259 00:06:09.379 00:06:09.379 real 0m6.037s 00:06:09.379 user 0m6.552s 00:06:09.379 sys 0m0.724s 00:06:09.379 22:23:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.379 ************************************ 00:06:09.379 END TEST exit_on_failed_rpc_init 00:06:09.379 ************************************ 00:06:09.379 22:23:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.379 22:23:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.379 ************************************ 00:06:09.379 END TEST skip_rpc 00:06:09.379 ************************************ 00:06:09.379 00:06:09.379 real 0m28.755s 00:06:09.379 user 0m27.568s 00:06:09.379 sys 0m2.649s 00:06:09.379 22:23:05 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.379 22:23:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.379 22:23:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:09.379 22:23:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.379 22:23:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.379 22:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.379 ************************************ 00:06:09.379 START TEST rpc_client 00:06:09.379 ************************************ 00:06:09.379 22:23:05 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:09.637 * Looking for test storage... 
00:06:09.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:09.637 22:23:05 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.637 22:23:05 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.637 22:23:05 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.637 22:23:05 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.637 22:23:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.638 22:23:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.638 22:23:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:09.638 22:23:05 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.638 22:23:05 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.638 --rc genhtml_branch_coverage=1 00:06:09.638 --rc genhtml_function_coverage=1 00:06:09.638 --rc genhtml_legend=1 00:06:09.638 --rc geninfo_all_blocks=1 00:06:09.638 --rc geninfo_unexecuted_blocks=1 00:06:09.638 00:06:09.638 ' 00:06:09.638 22:23:05 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.638 --rc genhtml_branch_coverage=1 00:06:09.638 --rc genhtml_function_coverage=1 00:06:09.638 --rc genhtml_legend=1 00:06:09.638 --rc geninfo_all_blocks=1 00:06:09.638 --rc geninfo_unexecuted_blocks=1 00:06:09.638 00:06:09.638 ' 00:06:09.638 22:23:05 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.638 --rc genhtml_branch_coverage=1 00:06:09.638 --rc genhtml_function_coverage=1 00:06:09.638 --rc genhtml_legend=1 00:06:09.638 --rc geninfo_all_blocks=1 00:06:09.638 --rc geninfo_unexecuted_blocks=1 00:06:09.638 00:06:09.638 ' 00:06:09.638 22:23:05 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.638 --rc genhtml_branch_coverage=1 00:06:09.638 --rc genhtml_function_coverage=1 00:06:09.638 --rc genhtml_legend=1 00:06:09.638 --rc geninfo_all_blocks=1 00:06:09.638 --rc geninfo_unexecuted_blocks=1 00:06:09.638 00:06:09.638 ' 00:06:09.638 22:23:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:09.638 OK 00:06:09.638 22:23:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:09.638 00:06:09.638 real 0m0.314s 00:06:09.638 user 0m0.167s 00:06:09.638 sys 0m0.162s 00:06:09.638 22:23:05 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.638 22:23:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:09.638 ************************************ 00:06:09.638 END TEST rpc_client 00:06:09.638 ************************************ 00:06:09.638 22:23:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.638 22:23:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.638 22:23:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.638 22:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:09.898 ************************************ 00:06:09.898 START TEST json_config 00:06:09.898 ************************************ 00:06:09.898 22:23:05 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.898 22:23:05 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.898 22:23:05 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.898 22:23:05 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.898 22:23:05 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.898 22:23:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.898 22:23:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.898 22:23:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.898 22:23:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.898 22:23:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.898 22:23:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.898 22:23:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.898 22:23:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.899 22:23:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.899 22:23:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.899 22:23:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.899 22:23:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:09.899 22:23:05 json_config -- scripts/common.sh@345 -- # : 1 00:06:09.899 22:23:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.899 22:23:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.899 22:23:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:09.899 22:23:05 json_config -- scripts/common.sh@353 -- # local d=1 00:06:09.899 22:23:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.899 22:23:05 json_config -- scripts/common.sh@355 -- # echo 1 00:06:09.899 22:23:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.899 22:23:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:09.899 22:23:05 json_config -- scripts/common.sh@353 -- # local d=2 00:06:09.899 22:23:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.899 22:23:05 json_config -- scripts/common.sh@355 -- # echo 2 00:06:09.899 22:23:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.899 22:23:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.899 22:23:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.899 22:23:05 json_config -- scripts/common.sh@368 -- # return 0 00:06:09.899 22:23:05 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.899 22:23:05 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.899 --rc genhtml_branch_coverage=1 00:06:09.899 --rc genhtml_function_coverage=1 00:06:09.899 --rc genhtml_legend=1 00:06:09.899 --rc geninfo_all_blocks=1 00:06:09.899 --rc geninfo_unexecuted_blocks=1 00:06:09.899 00:06:09.899 ' 00:06:09.899 22:23:05 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.899 --rc genhtml_branch_coverage=1 00:06:09.899 --rc genhtml_function_coverage=1 00:06:09.899 --rc genhtml_legend=1 00:06:09.899 --rc geninfo_all_blocks=1 00:06:09.899 --rc geninfo_unexecuted_blocks=1 00:06:09.899 00:06:09.899 ' 00:06:09.899 22:23:05 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.899 --rc genhtml_branch_coverage=1 00:06:09.899 --rc genhtml_function_coverage=1 00:06:09.899 --rc genhtml_legend=1 00:06:09.899 --rc geninfo_all_blocks=1 00:06:09.899 --rc geninfo_unexecuted_blocks=1 00:06:09.899 00:06:09.899 ' 00:06:09.899 22:23:05 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.899 --rc genhtml_branch_coverage=1 00:06:09.899 --rc genhtml_function_coverage=1 00:06:09.899 --rc genhtml_legend=1 00:06:09.899 --rc geninfo_all_blocks=1 00:06:09.899 --rc geninfo_unexecuted_blocks=1 00:06:09.899 00:06:09.899 ' 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0088fc5-d467-4219-97da-2837b0f3aecb 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=e0088fc5-d467-4219-97da-2837b0f3aecb 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.899 22:23:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.899 22:23:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.899 22:23:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.899 22:23:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.899 22:23:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.899 22:23:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.899 22:23:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.899 22:23:05 json_config -- paths/export.sh@5 -- # export PATH 00:06:09.899 22:23:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@51 -- # : 0 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.899 22:23:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.899 WARNING: No tests are enabled so not running JSON configuration tests 00:06:09.899 22:23:05 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:09.899 22:23:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:09.899 ************************************ 00:06:09.899 END TEST json_config 00:06:09.899 ************************************ 00:06:09.899 00:06:09.899 real 0m0.230s 00:06:09.899 user 0m0.123s 00:06:09.899 sys 0m0.109s 00:06:09.899 22:23:05 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.899 22:23:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.178 22:23:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.178 22:23:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.178 22:23:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.178 22:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:10.178 ************************************ 00:06:10.178 START TEST json_config_extra_key 00:06:10.178 ************************************ 00:06:10.178 22:23:05 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.178 22:23:05 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.178 22:23:05 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:10.178 22:23:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.178 22:23:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.178 22:23:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:10.178 22:23:06 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.178 22:23:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.178 --rc genhtml_branch_coverage=1 00:06:10.178 --rc genhtml_function_coverage=1 00:06:10.178 --rc genhtml_legend=1 00:06:10.178 --rc geninfo_all_blocks=1 00:06:10.178 --rc geninfo_unexecuted_blocks=1 00:06:10.178 00:06:10.178 ' 00:06:10.178 22:23:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.178 --rc genhtml_branch_coverage=1 00:06:10.178 --rc genhtml_function_coverage=1 00:06:10.178 --rc 
genhtml_legend=1 00:06:10.178 --rc geninfo_all_blocks=1 00:06:10.178 --rc geninfo_unexecuted_blocks=1 00:06:10.178 00:06:10.178 ' 00:06:10.178 22:23:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.178 --rc genhtml_branch_coverage=1 00:06:10.178 --rc genhtml_function_coverage=1 00:06:10.178 --rc genhtml_legend=1 00:06:10.178 --rc geninfo_all_blocks=1 00:06:10.178 --rc geninfo_unexecuted_blocks=1 00:06:10.178 00:06:10.178 ' 00:06:10.178 22:23:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.178 --rc genhtml_branch_coverage=1 00:06:10.178 --rc genhtml_function_coverage=1 00:06:10.178 --rc genhtml_legend=1 00:06:10.178 --rc geninfo_all_blocks=1 00:06:10.178 --rc geninfo_unexecuted_blocks=1 00:06:10.178 00:06:10.178 ' 00:06:10.178 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0088fc5-d467-4219-97da-2837b0f3aecb 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e0088fc5-d467-4219-97da-2837b0f3aecb 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.178 22:23:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.178 22:23:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.178 22:23:06 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.178 22:23:06 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.178 22:23:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.178 22:23:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.178 22:23:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.178 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1536') 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.437 INFO: launching applications... 
00:06:10.437 22:23:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57509 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.437 Waiting for target to run... 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57509 /var/tmp/spdk_tgt.sock 00:06:10.437 22:23:06 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57509 ']' 00:06:10.437 22:23:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1536 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:10.437 22:23:06 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.437 22:23:06 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.437 22:23:06 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.437 22:23:06 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.437 22:23:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.437 [2024-09-27 22:23:06.171187] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:06:10.437 [2024-09-27 22:23:06.171846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1536 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57509 ] 00:06:11.005 [2024-09-27 22:23:06.667488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.264 [2024-09-27 22:23:06.882771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.204 22:23:07 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.204 00:06:12.204 22:23:07 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:12.204 22:23:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.204 INFO: shutting down applications... 00:06:12.204 22:23:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:12.204 22:23:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.204 22:23:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57509 ]] 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57509 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:12.205 22:23:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.788 22:23:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.788 22:23:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.788 22:23:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:12.788 22:23:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.365 22:23:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.365 22:23:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.365 22:23:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:13.365 22:23:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.632 22:23:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.632 22:23:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.632 22:23:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:13.633 22:23:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.214 22:23:10 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:14.214 22:23:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.214 22:23:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:14.214 22:23:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.780 22:23:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.780 22:23:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.780 22:23:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:14.780 22:23:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.348 22:23:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.348 22:23:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.348 22:23:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:15.348 22:23:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.916 22:23:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.916 22:23:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.916 22:23:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:15.916 22:23:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.174 22:23:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.174 22:23:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.174 22:23:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:16.174 22:23:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.741 22:23:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.741 22:23:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.741 22:23:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57509 00:06:16.741 22:23:12 
json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:16.741 22:23:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:16.741 SPDK target shutdown done 00:06:16.741 Success 00:06:16.741 22:23:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:16.741 22:23:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:16.741 22:23:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:16.741 00:06:16.741 real 0m6.712s 00:06:16.741 user 0m5.837s 00:06:16.741 sys 0m0.786s 00:06:16.741 22:23:12 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.741 22:23:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:16.741 ************************************ 00:06:16.741 END TEST json_config_extra_key 00:06:16.741 ************************************ 00:06:16.741 22:23:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.741 22:23:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.741 22:23:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.741 22:23:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.741 ************************************ 00:06:16.741 START TEST alias_rpc 00:06:16.741 ************************************ 00:06:16.742 22:23:12 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.001 * Looking for test storage... 
00:06:17.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.001 22:23:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.001 --rc genhtml_branch_coverage=1 00:06:17.001 --rc genhtml_function_coverage=1 00:06:17.001 --rc genhtml_legend=1 00:06:17.001 --rc geninfo_all_blocks=1 00:06:17.001 --rc geninfo_unexecuted_blocks=1 00:06:17.001 00:06:17.001 ' 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.001 --rc genhtml_branch_coverage=1 00:06:17.001 --rc genhtml_function_coverage=1 00:06:17.001 --rc genhtml_legend=1 00:06:17.001 --rc geninfo_all_blocks=1 00:06:17.001 --rc geninfo_unexecuted_blocks=1 00:06:17.001 00:06:17.001 ' 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:06:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.001 --rc genhtml_branch_coverage=1 00:06:17.001 --rc genhtml_function_coverage=1 00:06:17.001 --rc genhtml_legend=1 00:06:17.001 --rc geninfo_all_blocks=1 00:06:17.001 --rc geninfo_unexecuted_blocks=1 00:06:17.001 00:06:17.001 ' 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.001 --rc genhtml_branch_coverage=1 00:06:17.001 --rc genhtml_function_coverage=1 00:06:17.001 --rc genhtml_legend=1 00:06:17.001 --rc geninfo_all_blocks=1 00:06:17.001 --rc geninfo_unexecuted_blocks=1 00:06:17.001 00:06:17.001 ' 00:06:17.001 22:23:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.001 22:23:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57651 00:06:17.001 22:23:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.001 22:23:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57651 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57651 ']' 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.001 22:23:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.263 [2024-09-27 22:23:12.958048] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:06:17.263 [2024-09-27 22:23:12.958437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57651 ] 00:06:17.263 [2024-09-27 22:23:13.129826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.522 [2024-09-27 22:23:13.358395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.903 22:23:14 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.903 22:23:14 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.903 22:23:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:19.161 22:23:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57651 00:06:19.161 22:23:14 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57651 ']' 00:06:19.161 22:23:14 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57651 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57651 00:06:19.162 killing process with pid 57651 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57651' 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@969 -- # kill 57651 00:06:19.162 22:23:14 alias_rpc -- common/autotest_common.sh@974 -- # wait 57651 00:06:22.491 ************************************ 00:06:22.491 END TEST alias_rpc 00:06:22.491 ************************************ 00:06:22.491 00:06:22.491 real 
0m5.566s 00:06:22.491 user 0m5.431s 00:06:22.491 sys 0m0.702s 00:06:22.491 22:23:18 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.491 22:23:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.491 22:23:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:22.491 22:23:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:22.491 22:23:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.491 22:23:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.491 22:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:22.491 ************************************ 00:06:22.491 START TEST spdkcli_tcp 00:06:22.491 ************************************ 00:06:22.491 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:22.491 * Looking for test storage... 00:06:22.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:22.491 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.491 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.491 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.751 
22:23:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.751 22:23:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.751 --rc genhtml_branch_coverage=1 00:06:22.751 --rc genhtml_function_coverage=1 00:06:22.751 --rc genhtml_legend=1 
00:06:22.751 --rc geninfo_all_blocks=1 00:06:22.751 --rc geninfo_unexecuted_blocks=1 00:06:22.751 00:06:22.751 ' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.751 --rc genhtml_branch_coverage=1 00:06:22.751 --rc genhtml_function_coverage=1 00:06:22.751 --rc genhtml_legend=1 00:06:22.751 --rc geninfo_all_blocks=1 00:06:22.751 --rc geninfo_unexecuted_blocks=1 00:06:22.751 00:06:22.751 ' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.751 --rc genhtml_branch_coverage=1 00:06:22.751 --rc genhtml_function_coverage=1 00:06:22.751 --rc genhtml_legend=1 00:06:22.751 --rc geninfo_all_blocks=1 00:06:22.751 --rc geninfo_unexecuted_blocks=1 00:06:22.751 00:06:22.751 ' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.751 --rc genhtml_branch_coverage=1 00:06:22.751 --rc genhtml_function_coverage=1 00:06:22.751 --rc genhtml_legend=1 00:06:22.751 --rc geninfo_all_blocks=1 00:06:22.751 --rc geninfo_unexecuted_blocks=1 00:06:22.751 00:06:22.751 ' 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:22.751 22:23:18 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57769 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:22.751 22:23:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57769 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57769 ']' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.751 22:23:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.751 [2024-09-27 22:23:18.573875] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
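The trace above shows tcp.sh launching spdk_tgt in the background and then blocking in waitforlisten, which retries (max_retries=100 here) until the target's UNIX domain RPC socket at /var/tmp/spdk.sock accepts connections. A minimal Python sketch of that polling loop, assuming only a socket path (illustrative only; the real helper lives in autotest_common.sh and also checks the process is still alive):

```python
import os
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll until a UNIX-domain socket at sock_path accepts a connection,
    in the spirit of waitforlisten (hypothetical name, not SPDK code)."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True            # target is up and listening
            except OSError:
                pass                   # socket file exists but not ready yet
            finally:
                s.close()
        time.sleep(delay)
    return False                       # gave up after max_retries attempts
```

The shell version additionally bails out early if the spdk_tgt process dies, which matters in CI so a crashed target fails fast instead of burning the retry budget.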
00:06:22.751 [2024-09-27 22:23:18.574028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57769 ] 00:06:23.010 [2024-09-27 22:23:18.745186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.269 [2024-09-27 22:23:18.979878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.269 [2024-09-27 22:23:18.979916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.646 22:23:20 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.646 22:23:20 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:24.646 22:23:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57797 00:06:24.646 22:23:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.646 22:23:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.904 [ 00:06:24.904 "bdev_malloc_delete", 00:06:24.904 "bdev_malloc_create", 00:06:24.904 "bdev_null_resize", 00:06:24.904 "bdev_null_delete", 00:06:24.904 "bdev_null_create", 00:06:24.904 "bdev_nvme_cuse_unregister", 00:06:24.904 "bdev_nvme_cuse_register", 00:06:24.904 "bdev_opal_new_user", 00:06:24.904 "bdev_opal_set_lock_state", 00:06:24.904 "bdev_opal_delete", 00:06:24.904 "bdev_opal_get_info", 00:06:24.904 "bdev_opal_create", 00:06:24.904 "bdev_nvme_opal_revert", 00:06:24.904 "bdev_nvme_opal_init", 00:06:24.904 "bdev_nvme_send_cmd", 00:06:24.904 "bdev_nvme_set_keys", 00:06:24.904 "bdev_nvme_get_path_iostat", 00:06:24.904 "bdev_nvme_get_mdns_discovery_info", 00:06:24.904 "bdev_nvme_stop_mdns_discovery", 00:06:24.904 "bdev_nvme_start_mdns_discovery", 00:06:24.904 "bdev_nvme_set_multipath_policy", 00:06:24.904 
"bdev_nvme_set_preferred_path", 00:06:24.904 "bdev_nvme_get_io_paths", 00:06:24.904 "bdev_nvme_remove_error_injection", 00:06:24.904 "bdev_nvme_add_error_injection", 00:06:24.904 "bdev_nvme_get_discovery_info", 00:06:24.904 "bdev_nvme_stop_discovery", 00:06:24.904 "bdev_nvme_start_discovery", 00:06:24.904 "bdev_nvme_get_controller_health_info", 00:06:24.904 "bdev_nvme_disable_controller", 00:06:24.904 "bdev_nvme_enable_controller", 00:06:24.904 "bdev_nvme_reset_controller", 00:06:24.904 "bdev_nvme_get_transport_statistics", 00:06:24.904 "bdev_nvme_apply_firmware", 00:06:24.904 "bdev_nvme_detach_controller", 00:06:24.904 "bdev_nvme_get_controllers", 00:06:24.904 "bdev_nvme_attach_controller", 00:06:24.904 "bdev_nvme_set_hotplug", 00:06:24.904 "bdev_nvme_set_options", 00:06:24.904 "bdev_passthru_delete", 00:06:24.904 "bdev_passthru_create", 00:06:24.904 "bdev_lvol_set_parent_bdev", 00:06:24.904 "bdev_lvol_set_parent", 00:06:24.904 "bdev_lvol_check_shallow_copy", 00:06:24.904 "bdev_lvol_start_shallow_copy", 00:06:24.904 "bdev_lvol_grow_lvstore", 00:06:24.904 "bdev_lvol_get_lvols", 00:06:24.904 "bdev_lvol_get_lvstores", 00:06:24.904 "bdev_lvol_delete", 00:06:24.904 "bdev_lvol_set_read_only", 00:06:24.904 "bdev_lvol_resize", 00:06:24.904 "bdev_lvol_decouple_parent", 00:06:24.904 "bdev_lvol_inflate", 00:06:24.904 "bdev_lvol_rename", 00:06:24.904 "bdev_lvol_clone_bdev", 00:06:24.904 "bdev_lvol_clone", 00:06:24.904 "bdev_lvol_snapshot", 00:06:24.904 "bdev_lvol_create", 00:06:24.904 "bdev_lvol_delete_lvstore", 00:06:24.904 "bdev_lvol_rename_lvstore", 00:06:24.904 "bdev_lvol_create_lvstore", 00:06:24.904 "bdev_raid_set_options", 00:06:24.904 "bdev_raid_remove_base_bdev", 00:06:24.904 "bdev_raid_add_base_bdev", 00:06:24.904 "bdev_raid_delete", 00:06:24.904 "bdev_raid_create", 00:06:24.904 "bdev_raid_get_bdevs", 00:06:24.904 "bdev_error_inject_error", 00:06:24.904 "bdev_error_delete", 00:06:24.904 "bdev_error_create", 00:06:24.904 "bdev_split_delete", 00:06:24.904 
"bdev_split_create", 00:06:24.904 "bdev_delay_delete", 00:06:24.904 "bdev_delay_create", 00:06:24.904 "bdev_delay_update_latency", 00:06:24.904 "bdev_zone_block_delete", 00:06:24.904 "bdev_zone_block_create", 00:06:24.904 "blobfs_create", 00:06:24.904 "blobfs_detect", 00:06:24.904 "blobfs_set_cache_size", 00:06:24.904 "bdev_aio_delete", 00:06:24.904 "bdev_aio_rescan", 00:06:24.904 "bdev_aio_create", 00:06:24.904 "bdev_ftl_set_property", 00:06:24.904 "bdev_ftl_get_properties", 00:06:24.904 "bdev_ftl_get_stats", 00:06:24.904 "bdev_ftl_unmap", 00:06:24.904 "bdev_ftl_unload", 00:06:24.904 "bdev_ftl_delete", 00:06:24.904 "bdev_ftl_load", 00:06:24.904 "bdev_ftl_create", 00:06:24.904 "bdev_virtio_attach_controller", 00:06:24.904 "bdev_virtio_scsi_get_devices", 00:06:24.904 "bdev_virtio_detach_controller", 00:06:24.904 "bdev_virtio_blk_set_hotplug", 00:06:24.904 "bdev_iscsi_delete", 00:06:24.904 "bdev_iscsi_create", 00:06:24.904 "bdev_iscsi_set_options", 00:06:24.904 "accel_error_inject_error", 00:06:24.904 "ioat_scan_accel_module", 00:06:24.904 "dsa_scan_accel_module", 00:06:24.904 "iaa_scan_accel_module", 00:06:24.904 "keyring_file_remove_key", 00:06:24.904 "keyring_file_add_key", 00:06:24.904 "keyring_linux_set_options", 00:06:24.904 "fsdev_aio_delete", 00:06:24.904 "fsdev_aio_create", 00:06:24.904 "iscsi_get_histogram", 00:06:24.904 "iscsi_enable_histogram", 00:06:24.904 "iscsi_set_options", 00:06:24.904 "iscsi_get_auth_groups", 00:06:24.904 "iscsi_auth_group_remove_secret", 00:06:24.904 "iscsi_auth_group_add_secret", 00:06:24.904 "iscsi_delete_auth_group", 00:06:24.904 "iscsi_create_auth_group", 00:06:24.904 "iscsi_set_discovery_auth", 00:06:24.904 "iscsi_get_options", 00:06:24.905 "iscsi_target_node_request_logout", 00:06:24.905 "iscsi_target_node_set_redirect", 00:06:24.905 "iscsi_target_node_set_auth", 00:06:24.905 "iscsi_target_node_add_lun", 00:06:24.905 "iscsi_get_stats", 00:06:24.905 "iscsi_get_connections", 00:06:24.905 "iscsi_portal_group_set_auth", 
00:06:24.905 "iscsi_start_portal_group", 00:06:24.905 "iscsi_delete_portal_group", 00:06:24.905 "iscsi_create_portal_group", 00:06:24.905 "iscsi_get_portal_groups", 00:06:24.905 "iscsi_delete_target_node", 00:06:24.905 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.905 "iscsi_target_node_add_pg_ig_maps", 00:06:24.905 "iscsi_create_target_node", 00:06:24.905 "iscsi_get_target_nodes", 00:06:24.905 "iscsi_delete_initiator_group", 00:06:24.905 "iscsi_initiator_group_remove_initiators", 00:06:24.905 "iscsi_initiator_group_add_initiators", 00:06:24.905 "iscsi_create_initiator_group", 00:06:24.905 "iscsi_get_initiator_groups", 00:06:24.905 "nvmf_set_crdt", 00:06:24.905 "nvmf_set_config", 00:06:24.905 "nvmf_set_max_subsystems", 00:06:24.905 "nvmf_stop_mdns_prr", 00:06:24.905 "nvmf_publish_mdns_prr", 00:06:24.905 "nvmf_subsystem_get_listeners", 00:06:24.905 "nvmf_subsystem_get_qpairs", 00:06:24.905 "nvmf_subsystem_get_controllers", 00:06:24.905 "nvmf_get_stats", 00:06:24.905 "nvmf_get_transports", 00:06:24.905 "nvmf_create_transport", 00:06:24.905 "nvmf_get_targets", 00:06:24.905 "nvmf_delete_target", 00:06:24.905 "nvmf_create_target", 00:06:24.905 "nvmf_subsystem_allow_any_host", 00:06:24.905 "nvmf_subsystem_set_keys", 00:06:24.905 "nvmf_subsystem_remove_host", 00:06:24.905 "nvmf_subsystem_add_host", 00:06:24.905 "nvmf_ns_remove_host", 00:06:24.905 "nvmf_ns_add_host", 00:06:24.905 "nvmf_subsystem_remove_ns", 00:06:24.905 "nvmf_subsystem_set_ns_ana_group", 00:06:24.905 "nvmf_subsystem_add_ns", 00:06:24.905 "nvmf_subsystem_listener_set_ana_state", 00:06:24.905 "nvmf_discovery_get_referrals", 00:06:24.905 "nvmf_discovery_remove_referral", 00:06:24.905 "nvmf_discovery_add_referral", 00:06:24.905 "nvmf_subsystem_remove_listener", 00:06:24.905 "nvmf_subsystem_add_listener", 00:06:24.905 "nvmf_delete_subsystem", 00:06:24.905 "nvmf_create_subsystem", 00:06:24.905 "nvmf_get_subsystems", 00:06:24.905 "env_dpdk_get_mem_stats", 00:06:24.905 "nbd_get_disks", 00:06:24.905 
"nbd_stop_disk", 00:06:24.905 "nbd_start_disk", 00:06:24.905 "ublk_recover_disk", 00:06:24.905 "ublk_get_disks", 00:06:24.905 "ublk_stop_disk", 00:06:24.905 "ublk_start_disk", 00:06:24.905 "ublk_destroy_target", 00:06:24.905 "ublk_create_target", 00:06:24.905 "virtio_blk_create_transport", 00:06:24.905 "virtio_blk_get_transports", 00:06:24.905 "vhost_controller_set_coalescing", 00:06:24.905 "vhost_get_controllers", 00:06:24.905 "vhost_delete_controller", 00:06:24.905 "vhost_create_blk_controller", 00:06:24.905 "vhost_scsi_controller_remove_target", 00:06:24.905 "vhost_scsi_controller_add_target", 00:06:24.905 "vhost_start_scsi_controller", 00:06:24.905 "vhost_create_scsi_controller", 00:06:24.905 "thread_set_cpumask", 00:06:24.905 "scheduler_set_options", 00:06:24.905 "framework_get_governor", 00:06:24.905 "framework_get_scheduler", 00:06:24.905 "framework_set_scheduler", 00:06:24.905 "framework_get_reactors", 00:06:24.905 "thread_get_io_channels", 00:06:24.905 "thread_get_pollers", 00:06:24.905 "thread_get_stats", 00:06:24.905 "framework_monitor_context_switch", 00:06:24.905 "spdk_kill_instance", 00:06:24.905 "log_enable_timestamps", 00:06:24.905 "log_get_flags", 00:06:24.905 "log_clear_flag", 00:06:24.905 "log_set_flag", 00:06:24.905 "log_get_level", 00:06:24.905 "log_set_level", 00:06:24.905 "log_get_print_level", 00:06:24.905 "log_set_print_level", 00:06:24.905 "framework_enable_cpumask_locks", 00:06:24.905 "framework_disable_cpumask_locks", 00:06:24.905 "framework_wait_init", 00:06:24.905 "framework_start_init", 00:06:24.905 "scsi_get_devices", 00:06:24.905 "bdev_get_histogram", 00:06:24.905 "bdev_enable_histogram", 00:06:24.905 "bdev_set_qos_limit", 00:06:24.905 "bdev_set_qd_sampling_period", 00:06:24.905 "bdev_get_bdevs", 00:06:24.905 "bdev_reset_iostat", 00:06:24.905 "bdev_get_iostat", 00:06:24.905 "bdev_examine", 00:06:24.905 "bdev_wait_for_examine", 00:06:24.905 "bdev_set_options", 00:06:24.905 "accel_get_stats", 00:06:24.905 "accel_set_options", 
00:06:24.905 "accel_set_driver", 00:06:24.905 "accel_crypto_key_destroy", 00:06:24.905 "accel_crypto_keys_get", 00:06:24.905 "accel_crypto_key_create", 00:06:24.905 "accel_assign_opc", 00:06:24.905 "accel_get_module_info", 00:06:24.905 "accel_get_opc_assignments", 00:06:24.905 "vmd_rescan", 00:06:24.905 "vmd_remove_device", 00:06:24.905 "vmd_enable", 00:06:24.905 "sock_get_default_impl", 00:06:24.905 "sock_set_default_impl", 00:06:24.905 "sock_impl_set_options", 00:06:24.905 "sock_impl_get_options", 00:06:24.905 "iobuf_get_stats", 00:06:24.905 "iobuf_set_options", 00:06:24.905 "keyring_get_keys", 00:06:24.905 "framework_get_pci_devices", 00:06:24.905 "framework_get_config", 00:06:24.905 "framework_get_subsystems", 00:06:24.905 "fsdev_set_opts", 00:06:24.905 "fsdev_get_opts", 00:06:24.905 "trace_get_info", 00:06:24.905 "trace_get_tpoint_group_mask", 00:06:24.905 "trace_disable_tpoint_group", 00:06:24.905 "trace_enable_tpoint_group", 00:06:24.905 "trace_clear_tpoint_mask", 00:06:24.905 "trace_set_tpoint_mask", 00:06:24.905 "notify_get_notifications", 00:06:24.905 "notify_get_types", 00:06:24.905 "spdk_get_version", 00:06:24.905 "rpc_get_methods" 00:06:24.905 ] 00:06:24.905 22:23:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.905 22:23:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.905 22:23:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57769 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57769 ']' 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57769 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.905 22:23:20 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57769 00:06:24.905 killing process with pid 57769 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57769' 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57769 00:06:24.905 22:23:20 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57769 00:06:28.247 ************************************ 00:06:28.247 END TEST spdkcli_tcp 00:06:28.247 ************************************ 00:06:28.247 00:06:28.247 real 0m5.609s 00:06:28.247 user 0m9.926s 00:06:28.247 sys 0m0.747s 00:06:28.247 22:23:23 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.247 22:23:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 22:23:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.247 22:23:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.247 22:23:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.247 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 ************************************ 00:06:28.247 START TEST dpdk_mem_utility 00:06:28.247 ************************************ 00:06:28.247 22:23:23 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.247 * Looking for test storage... 
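The spdkcli_tcp run that just finished used socat to expose the target's UNIX RPC socket on TCP (socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock) and then queried rpc_get_methods through it with scripts/rpc.py, producing the long method list above. rpc.py speaks plain JSON-RPC 2.0 over that stream; a sketch of the request payload it sends (an illustration of the wire format, not SPDK code):

```python
import json

def build_rpc_request(method, request_id=1, params=None):
    """Build a JSON-RPC 2.0 request payload like the one scripts/rpc.py
    writes to the socket (e.g. for rpc_get_methods)."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

payload = build_rpc_request("rpc_get_methods")
```

The response is a JSON object whose "result" field carries the array of method-name strings seen in the log; the `-r 100 -t 2` flags on rpc.py in the trace are its retry count and timeout for establishing that connection.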
00:06:28.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:28.247 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.247 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.247 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.247 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.247 22:23:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.506 22:23:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.506 --rc genhtml_branch_coverage=1 00:06:28.506 --rc genhtml_function_coverage=1 00:06:28.506 --rc genhtml_legend=1 00:06:28.506 --rc geninfo_all_blocks=1 00:06:28.506 --rc geninfo_unexecuted_blocks=1 00:06:28.506 00:06:28.506 ' 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.506 --rc genhtml_branch_coverage=1 00:06:28.506 --rc genhtml_function_coverage=1 00:06:28.506 --rc genhtml_legend=1 00:06:28.506 --rc geninfo_all_blocks=1 00:06:28.506 --rc 
geninfo_unexecuted_blocks=1 00:06:28.506 00:06:28.506 ' 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.506 --rc genhtml_branch_coverage=1 00:06:28.506 --rc genhtml_function_coverage=1 00:06:28.506 --rc genhtml_legend=1 00:06:28.506 --rc geninfo_all_blocks=1 00:06:28.506 --rc geninfo_unexecuted_blocks=1 00:06:28.506 00:06:28.506 ' 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.506 --rc genhtml_branch_coverage=1 00:06:28.506 --rc genhtml_function_coverage=1 00:06:28.506 --rc genhtml_legend=1 00:06:28.506 --rc geninfo_all_blocks=1 00:06:28.506 --rc geninfo_unexecuted_blocks=1 00:06:28.506 00:06:28.506 ' 00:06:28.506 22:23:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.506 22:23:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57908 00:06:28.506 22:23:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.506 22:23:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57908 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57908 ']' 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
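Both test prologues above step through scripts/common.sh's cmp_versions helper to decide whether the installed lcov is older than 2 (`lt 1.15 2`): each version string is split on `.`, `-`, and `:` into numeric fields, missing fields compare as 0, and the first differing field decides. A field-by-field Python sketch of the same comparison (the function name mirrors the shell helper; this is an illustration, not the SPDK script):

```python
import re

def cmp_versions(ver1, op, ver2):
    """Compare dotted version strings field by field, as scripts/common.sh
    cmp_versions does; op is one of '<', '>', '=='."""
    split = lambda v: [int(f) if f.isdigit() else 0 for f in re.split(r"[.:-]", v)]
    a, b = split(ver1), split(ver2)
    width = max(len(a), len(b))
    a += [0] * (width - len(a))      # pad shorter version with zeros
    b += [0] * (width - len(b))
    for x, y in zip(a, b):
        if x != y:                   # first differing field decides
            return (x < y) if op == "<" else (x > y) if op == ">" else False
    return op == "=="                # all fields equal
```

Here 1.15 splits to [1, 15] and 2 pads to [2, 0], so the first fields already decide 1.15 < 2, which is why the trace takes the lcov-1.x branch of LCOV_OPTS.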
00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.506 22:23:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 [2024-09-27 22:23:24.252947] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:06:28.506 [2024-09-27 22:23:24.253102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57908 ] 00:06:28.764 [2024-09-27 22:23:24.421872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.023 [2024-09-27 22:23:24.651368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.398 22:23:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.398 22:23:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:30.398 22:23:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:30.398 22:23:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:30.398 22:23:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.398 22:23:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.398 { 00:06:30.398 "filename": "/tmp/spdk_mem_dump.txt" 00:06:30.398 } 00:06:30.399 22:23:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.399 22:23:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:30.399 DPDK memory size 1106.000000 MiB in 1 heap(s) 00:06:30.399 1 heaps totaling size 1106.000000 MiB 00:06:30.399 size: 1106.000000 MiB heap id: 0 00:06:30.399 end heaps---------- 00:06:30.399 9 mempools totaling size 883.273621 MiB 
00:06:30.399 size: 333.169250 MiB name: bdev_io_57908 00:06:30.399 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:30.399 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:30.399 size: 51.011292 MiB name: evtpool_57908 00:06:30.399 size: 50.003479 MiB name: msgpool_57908 00:06:30.399 size: 36.509338 MiB name: fsdev_io_57908 00:06:30.399 size: 21.763794 MiB name: PDU_Pool 00:06:30.399 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:30.399 size: 0.026123 MiB name: Session_Pool 00:06:30.399 end mempools------- 00:06:30.399 6 memzones totaling size 4.142822 MiB 00:06:30.399 size: 1.000366 MiB name: RG_ring_0_57908 00:06:30.399 size: 1.000366 MiB name: RG_ring_1_57908 00:06:30.399 size: 1.000366 MiB name: RG_ring_4_57908 00:06:30.399 size: 1.000366 MiB name: RG_ring_5_57908 00:06:30.399 size: 0.125366 MiB name: RG_ring_2_57908 00:06:30.399 size: 0.015991 MiB name: RG_ring_3_57908 00:06:30.399 end memzones------- 00:06:30.399 22:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:30.399 heap id: 0 total size: 1106.000000 MiB number of busy elements: 381 number of free elements: 19 00:06:30.399 list of free elements. 
size: 19.273682 MiB 00:06:30.399 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:30.399 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:30.399 element at address: 0x200009600000 with size: 1.995972 MiB 00:06:30.399 element at address: 0x20000d800000 with size: 1.995972 MiB 00:06:30.399 element at address: 0x200007000000 with size: 1.991028 MiB 00:06:30.399 element at address: 0x20002af00040 with size: 0.999939 MiB 00:06:30.399 element at address: 0x20002b300040 with size: 0.999939 MiB 00:06:30.399 element at address: 0x20002b400000 with size: 0.999084 MiB 00:06:30.399 element at address: 0x200044000000 with size: 0.994324 MiB 00:06:30.399 element at address: 0x20002b700040 with size: 0.936401 MiB 00:06:30.399 element at address: 0x200000200000 with size: 0.829224 MiB 00:06:30.399 element at address: 0x20002ce00000 with size: 0.563904 MiB 00:06:30.399 element at address: 0x20002b000000 with size: 0.489197 MiB 00:06:30.399 element at address: 0x20002b800000 with size: 0.485413 MiB 00:06:30.399 element at address: 0x200003e00000 with size: 0.476746 MiB 00:06:30.399 element at address: 0x20002ac00000 with size: 0.456421 MiB 00:06:30.399 element at address: 0x20003a200000 with size: 0.390442 MiB 00:06:30.399 element at address: 0x200003a00000 with size: 0.350647 MiB 00:06:30.399 element at address: 0x200015e00000 with size: 0.322693 MiB 00:06:30.399 list of standard malloc elements. 
size: 199.303833 MiB
00:06:30.399 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:06:30.399 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:06:30.399 element at address: 0x20002adfff80 with size: 1.000183 MiB
00:06:30.399 element at address: 0x20002b1fff80 with size: 1.000183 MiB
00:06:30.399 element at address: 0x20002b5fff80 with size: 1.000183 MiB
00:06:30.399 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:30.399 element at address: 0x20002b7eff40 with size: 0.062683 MiB
00:06:30.399 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:30.399 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:06:30.399 element at address: 0x20002b7efdc0 with size: 0.000366 MiB
00:06:30.399 element at address: 0x200015dff040 with size: 0.000305 MiB
00:06:30.399 [elided: several hundred further free elements of 0.000244 MiB each, in runs at 0x2000002d4480-0x2000002d7b00, 0x2000003d9d80, 0x200003a7e0c0-0x200003affa80, 0x200003e7a0c0-0x200003eff000, 0x20000d7ff200-0x20000d7fff00, 0x200015dff180-0x200015e529c0, 0x20002ac74d80-0x20002acfdd00, 0x20002b07d3c0-0x20002b8bc680, 0x20002ce905c0-0x20002ce953c0, 0x20003a263f40-0x20003a26fe80]
00:06:30.401 list of memzone associated elements.
size: 887.422485 MiB 00:06:30.401 element at address: 0x200015f54c40 with size: 332.668884 MiB 00:06:30.401 associated memzone info: size: 332.668701 MiB name: MP_bdev_io_57908_0 00:06:30.401 element at address: 0x20002ce954c0 with size: 211.416809 MiB 00:06:30.401 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:30.401 element at address: 0x20003a26ff80 with size: 157.562622 MiB 00:06:30.401 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:30.401 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:30.401 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57908_0 00:06:30.401 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:30.401 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57908_0 00:06:30.401 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:06:30.401 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57908_0 00:06:30.401 element at address: 0x20002b9be900 with size: 20.255615 MiB 00:06:30.401 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:30.401 element at address: 0x2000441feb00 with size: 18.005127 MiB 00:06:30.401 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:30.401 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:30.401 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57908 00:06:30.401 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:30.401 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57908 00:06:30.401 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:30.401 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57908 00:06:30.401 element at address: 0x20002b0fde00 with size: 1.008179 MiB 00:06:30.401 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:30.401 element at address: 0x20002b8bc780 with size: 1.008179 MiB 00:06:30.401 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:30.401 element at address: 0x20002acfde00 with size: 1.008179 MiB 00:06:30.401 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:30.401 element at address: 0x200015e52ac0 with size: 1.008179 MiB 00:06:30.401 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:30.401 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:30.401 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57908 00:06:30.401 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:30.402 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57908 00:06:30.402 element at address: 0x20002b4ffd40 with size: 1.000549 MiB 00:06:30.402 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57908 00:06:30.402 element at address: 0x2000440fe8c0 with size: 1.000549 MiB 00:06:30.402 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57908 00:06:30.402 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:06:30.402 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57908 00:06:30.402 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:06:30.402 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57908 00:06:30.402 element at address: 0x20002b07dac0 with size: 0.500549 MiB 00:06:30.402 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:30.402 element at address: 0x20002ac75880 with size: 0.500549 MiB 00:06:30.402 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:30.402 element at address: 0x20002b87c440 with size: 0.250549 MiB 00:06:30.402 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:30.402 element at address: 0x200003a5de80 with size: 0.125549 MiB 00:06:30.402 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57908 00:06:30.402 element at address: 0x20002acf5ac0 with size: 0.031799 MiB 00:06:30.402 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:30.402 element at address: 0x20003a264140 with size: 0.023804 MiB 00:06:30.402 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:30.402 element at address: 0x200003a59c40 with size: 0.016174 MiB 00:06:30.402 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57908 00:06:30.402 element at address: 0x20003a26a2c0 with size: 0.002502 MiB 00:06:30.402 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:30.402 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:30.402 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57908 00:06:30.402 element at address: 0x200003aff800 with size: 0.000366 MiB 00:06:30.402 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57908 00:06:30.402 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:06:30.402 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57908 00:06:30.402 element at address: 0x20003a26ae00 with size: 0.000366 MiB 00:06:30.402 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:30.402 22:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:30.402 22:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57908 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57908 ']' 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57908 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57908 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.402 killing process with pid 57908 00:06:30.402 22:23:26 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57908' 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57908 00:06:30.402 22:23:26 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57908 00:06:33.686 ************************************ 00:06:33.686 END TEST dpdk_mem_utility 00:06:33.686 ************************************ 00:06:33.686 00:06:33.686 real 0m5.461s 00:06:33.686 user 0m5.313s 00:06:33.686 sys 0m0.658s 00:06:33.686 22:23:29 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.686 22:23:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.686 22:23:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:33.686 22:23:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.686 22:23:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.686 22:23:29 -- common/autotest_common.sh@10 -- # set +x 00:06:33.686 ************************************ 00:06:33.686 START TEST event 00:06:33.686 ************************************ 00:06:33.686 22:23:29 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:33.686 * Looking for test storage... 
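The `killprocess` helper traced above follows a recognizable pattern: verify the PID exists with `kill -0`, inspect its command name with `ps --no-headers -o comm=`, send SIGTERM, then `wait` to reap it. A minimal sketch of that pattern, assuming GNU `ps` (this is a reconstruction for illustration, not the exact autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern visible in the trace above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"                               # SIGTERM, as in the trace
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}
```

The trailing `wait` only succeeds when the target is a child of the current shell, which holds for daemons the test framework started itself; for foreign PIDs the redirect swallows the error.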
00:06:33.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.945 22:23:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.945 22:23:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.945 22:23:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.945 22:23:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.945 22:23:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.945 22:23:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.945 22:23:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.945 22:23:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.945 22:23:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.945 22:23:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.945 22:23:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.945 22:23:29 event -- scripts/common.sh@344 -- # case "$op" in 00:06:33.945 22:23:29 event -- scripts/common.sh@345 -- # : 1 00:06:33.945 22:23:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.945 22:23:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.945 22:23:29 event -- scripts/common.sh@365 -- # decimal 1 00:06:33.945 22:23:29 event -- scripts/common.sh@353 -- # local d=1 00:06:33.945 22:23:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.945 22:23:29 event -- scripts/common.sh@355 -- # echo 1 00:06:33.945 22:23:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.945 22:23:29 event -- scripts/common.sh@366 -- # decimal 2 00:06:33.945 22:23:29 event -- scripts/common.sh@353 -- # local d=2 00:06:33.945 22:23:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.945 22:23:29 event -- scripts/common.sh@355 -- # echo 2 00:06:33.945 22:23:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.945 22:23:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.945 22:23:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.945 22:23:29 event -- scripts/common.sh@368 -- # return 0 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.945 --rc genhtml_branch_coverage=1 00:06:33.945 --rc genhtml_function_coverage=1 00:06:33.945 --rc genhtml_legend=1 00:06:33.945 --rc geninfo_all_blocks=1 00:06:33.945 --rc geninfo_unexecuted_blocks=1 00:06:33.945 00:06:33.945 ' 00:06:33.945 22:23:29 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.946 --rc genhtml_branch_coverage=1 00:06:33.946 --rc genhtml_function_coverage=1 00:06:33.946 --rc genhtml_legend=1 00:06:33.946 --rc geninfo_all_blocks=1 00:06:33.946 --rc geninfo_unexecuted_blocks=1 00:06:33.946 00:06:33.946 ' 00:06:33.946 22:23:29 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.946 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:33.946 --rc genhtml_branch_coverage=1 00:06:33.946 --rc genhtml_function_coverage=1 00:06:33.946 --rc genhtml_legend=1 00:06:33.946 --rc geninfo_all_blocks=1 00:06:33.946 --rc geninfo_unexecuted_blocks=1 00:06:33.946 00:06:33.946 ' 00:06:33.946 22:23:29 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.946 --rc genhtml_branch_coverage=1 00:06:33.946 --rc genhtml_function_coverage=1 00:06:33.946 --rc genhtml_legend=1 00:06:33.946 --rc geninfo_all_blocks=1 00:06:33.946 --rc geninfo_unexecuted_blocks=1 00:06:33.946 00:06:33.946 ' 00:06:33.946 22:23:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:33.946 22:23:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:33.946 22:23:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:33.946 22:23:29 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:33.946 22:23:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.946 22:23:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.946 ************************************ 00:06:33.946 START TEST event_perf 00:06:33.946 ************************************ 00:06:33.946 22:23:29 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:33.946 Running I/O for 1 seconds...[2024-09-27 22:23:29.732312] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:06:33.946 [2024-09-27 22:23:29.732530] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58031 ] 00:06:34.204 [2024-09-27 22:23:29.904502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.463 [2024-09-27 22:23:30.147794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.463 [2024-09-27 22:23:30.148050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.463 [2024-09-27 22:23:30.148165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.463 Running I/O for 1 seconds...[2024-09-27 22:23:30.148196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.839 00:06:35.839 lcore 0: 193288 00:06:35.839 lcore 1: 193289 00:06:35.839 lcore 2: 193290 00:06:35.839 lcore 3: 193290 00:06:35.839 done. 
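The `event_perf` run above is launched with `-m 0xF` and reports one counter per lcore (0 through 3): each set bit in the hex mask becomes a reactor core. A sketch of that mask expansion in plain bash; `mask_to_cores` is a hypothetical helper, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch: expand a hex core mask like -m 0xF into the lcore ids it selects.
# mask_to_cores is illustrative, not an SPDK function.
mask_to_cores() {
    local mask=$(( $1 ))            # bash arithmetic accepts the 0x prefix
    local bit=0
    local -a cores=()
    while (( mask )); do
        if (( mask & 1 )); then cores+=("$bit"); fi
        mask=$(( mask >> 1 ))
        bit=$(( bit + 1 ))
    done
    echo "${cores[@]}"
}

mask_to_cores 0xF    # -> 0 1 2 3
mask_to_cores 0x5    # -> 0 2
```

This is why the log shows exactly four `Reactor started on core N` notices and four `lcore N:` result lines for `0xF`.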
00:06:35.839 00:06:35.839 real 0m1.882s 00:06:35.839 user 0m4.613s 00:06:35.839 sys 0m0.138s 00:06:35.839 22:23:31 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.839 22:23:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.839 ************************************ 00:06:35.839 END TEST event_perf 00:06:35.839 ************************************ 00:06:35.839 22:23:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:35.839 22:23:31 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:35.839 22:23:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.839 22:23:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.839 ************************************ 00:06:35.839 START TEST event_reactor 00:06:35.839 ************************************ 00:06:35.839 22:23:31 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:35.839 [2024-09-27 22:23:31.688830] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:06:35.839 [2024-09-27 22:23:31.689116] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58072 ] 00:06:36.105 [2024-09-27 22:23:31.861501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.363 [2024-09-27 22:23:32.099193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.741 test_start 00:06:37.741 oneshot 00:06:37.741 tick 100 00:06:37.741 tick 100 00:06:37.741 tick 250 00:06:37.741 tick 100 00:06:37.741 tick 100 00:06:37.741 tick 100 00:06:37.741 tick 250 00:06:37.741 tick 500 00:06:37.741 tick 100 00:06:37.741 tick 100 00:06:37.741 tick 250 00:06:37.741 tick 100 00:06:37.741 tick 100 00:06:37.741 test_end 00:06:37.741 00:06:37.741 real 0m1.869s 00:06:37.741 user 0m1.640s 00:06:37.741 sys 0m0.119s 00:06:37.741 22:23:33 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.741 ************************************ 00:06:37.741 END TEST event_reactor 00:06:37.741 ************************************ 00:06:37.741 22:23:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:37.741 22:23:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.741 22:23:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:37.741 22:23:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.741 22:23:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.741 ************************************ 00:06:37.741 START TEST event_reactor_perf 00:06:37.741 ************************************ 00:06:37.741 22:23:33 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.998 [2024-09-27 
22:23:33.636091] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:06:37.998 [2024-09-27 22:23:33.636233] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58114 ] 00:06:37.998 [2024-09-27 22:23:33.806942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.255 [2024-09-27 22:23:34.038675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.628 test_start 00:06:39.628 test_end 00:06:39.628 Performance: 372115 events per second 00:06:39.628 ************************************ 00:06:39.628 END TEST event_reactor_perf 00:06:39.628 ************************************ 00:06:39.628 00:06:39.628 real 0m1.871s 00:06:39.628 user 0m1.632s 00:06:39.628 sys 0m0.128s 00:06:39.628 22:23:35 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.628 22:23:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.886 22:23:35 event -- event/event.sh@49 -- # uname -s 00:06:39.886 22:23:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:39.886 22:23:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.886 22:23:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.886 22:23:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.886 22:23:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.886 ************************************ 00:06:39.886 START TEST event_scheduler 00:06:39.886 ************************************ 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.886 * Looking for test storage... 
00:06:39.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.886 22:23:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.886 22:23:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.886 --rc genhtml_branch_coverage=1 00:06:39.886 --rc genhtml_function_coverage=1 00:06:39.886 --rc genhtml_legend=1 00:06:39.887 --rc geninfo_all_blocks=1 00:06:39.887 --rc geninfo_unexecuted_blocks=1 00:06:39.887 00:06:39.887 ' 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.887 --rc genhtml_branch_coverage=1 00:06:39.887 --rc genhtml_function_coverage=1 00:06:39.887 --rc 
genhtml_legend=1 00:06:39.887 --rc geninfo_all_blocks=1 00:06:39.887 --rc geninfo_unexecuted_blocks=1 00:06:39.887 00:06:39.887 ' 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.887 --rc genhtml_branch_coverage=1 00:06:39.887 --rc genhtml_function_coverage=1 00:06:39.887 --rc genhtml_legend=1 00:06:39.887 --rc geninfo_all_blocks=1 00:06:39.887 --rc geninfo_unexecuted_blocks=1 00:06:39.887 00:06:39.887 ' 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.887 --rc genhtml_branch_coverage=1 00:06:39.887 --rc genhtml_function_coverage=1 00:06:39.887 --rc genhtml_legend=1 00:06:39.887 --rc geninfo_all_blocks=1 00:06:39.887 --rc geninfo_unexecuted_blocks=1 00:06:39.887 00:06:39.887 ' 00:06:39.887 22:23:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:39.887 22:23:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58190 00:06:39.887 22:23:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:39.887 22:23:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.887 22:23:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58190 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58190 ']' 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
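The xtrace above walks `cmp_versions` for `lt 1.15 2`: both versions are split on `IFS=.-:` into arrays and compared field by field, with missing fields treated as zero. A stand-alone sketch of the same idea; `ver_lt` is a hypothetical re-creation for illustration, not the actual `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version compare traced above (lt 1.15 2).
# ver_lt is illustrative, not scripts/common.sh itself.
ver_lt() {
    local IFS=.-:                   # same separators the trace sets
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad short versions with 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                        # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Comparing fields numerically rather than as strings is the point: a plain string compare would call 1.15 greater than 1.2, while the field-wise compare correctly orders 2 after 15 in the minor position only when the values differ numerically.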
00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.887 22:23:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.152 [2024-09-27 22:23:35.852953] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:06:40.152 [2024-09-27 22:23:35.853165] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:06:40.415 [2024-09-27 22:23:36.047885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.415 [2024-09-27 22:23:36.280834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.415 [2024-09-27 22:23:36.281049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.415 [2024-09-27 22:23:36.281186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.415 [2024-09-27 22:23:36.281212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:40.981 22:23:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.981 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.981 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.981 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.981 POWER: Cannot set governor of lcore 0 to performance 00:06:40.981 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.981 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.981 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.981 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.981 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:40.981 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:40.981 POWER: Unable to set Power Management Environment for lcore 0 00:06:40.981 [2024-09-27 22:23:36.687088] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:40.981 [2024-09-27 22:23:36.687113] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:40.981 [2024-09-27 22:23:36.687126] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:40.981 [2024-09-27 22:23:36.687168] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:40.981 [2024-09-27 22:23:36.687182] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:40.981 [2024-09-27 22:23:36.687195] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.981 22:23:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.981 22:23:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
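The `POWER:` errors above come from the dynamic scheduler trying to write each core's cpufreq governor and failing inside the VM, after which it logs the fallback and keeps running with its default load/core/busy limits. A hedged sketch of that probe; `governor_settable` is illustrative, and on a VM (as in this log) the sysfs path typically does not exist, which is exactly the fallback case recorded here:

```shell
#!/usr/bin/env bash
# Sketch of the probe behind the POWER errors above: check whether a core's
# cpufreq governor file is writable before trying to manage power for it.
# governor_settable is a hypothetical helper, not SPDK/DPDK code.
governor_settable() {
    local cpu=$1
    local f=/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor
    [[ -w $f ]]                     # writable governor file => power manageable
}

if governor_settable 0; then
    echo "dpdk governor available"
else
    echo "falling back: cannot set governor of lcore 0"
fi
```

Probing writability first avoids the repeated open failures in the log; either way the scheduler test continues, which is why the run still reports `framework_start_init` succeeding.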
00:06:41.622 [2024-09-27 22:23:37.399222] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:41.622 22:23:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.622 22:23:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:41.622 22:23:37 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.622 22:23:37 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.622 22:23:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.622 ************************************ 00:06:41.622 START TEST scheduler_create_thread 00:06:41.622 ************************************ 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.622 2 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.622 3 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.622 4 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.622 5 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.622 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 6 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 7 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 8 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 9 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 10 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.881 22:23:37 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.881 22:23:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.818 22:23:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.818 22:23:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:42.818 22:23:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.818 22:23:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.753 22:23:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.753 22:23:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:43.753 22:23:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:43.753 22:23:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.753 22:23:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.691 ************************************ 00:06:44.691 END TEST scheduler_create_thread 00:06:44.691 ************************************ 00:06:44.691 22:23:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.691 00:06:44.691 real 0m2.898s 00:06:44.691 user 0m0.023s 00:06:44.691 sys 0m0.009s 00:06:44.691 22:23:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.691 22:23:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.691 22:23:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:44.691 22:23:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58190 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58190 ']' 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58190 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58190 00:06:44.691 killing process with pid 58190 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58190' 00:06:44.691 22:23:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58190 00:06:44.691 22:23:40 
event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58190 00:06:44.952 [2024-09-27 22:23:40.788488] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:47.563 00:06:47.563 real 0m7.483s 00:06:47.563 user 0m15.770s 00:06:47.563 sys 0m0.656s 00:06:47.563 22:23:43 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.563 ************************************ 00:06:47.563 END TEST event_scheduler 00:06:47.563 ************************************ 00:06:47.563 22:23:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.563 22:23:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.563 22:23:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.563 22:23:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.563 22:23:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.563 22:23:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.563 ************************************ 00:06:47.563 START TEST app_repeat 00:06:47.563 ************************************ 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58318 00:06:47.563 22:23:43 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58318' 00:06:47.563 Process app_repeat pid: 58318 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.563 spdk_app_start Round 0 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:47.563 22:23:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58318 /var/tmp/spdk-nbd.sock 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58318 ']' 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.563 22:23:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.563 [2024-09-27 22:23:43.155753] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:06:47.563 [2024-09-27 22:23:43.155933] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:06:47.563 [2024-09-27 22:23:43.337880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.823 [2024-09-27 22:23:43.593478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.823 [2024-09-27 22:23:43.593488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.761 22:23:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.761 22:23:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:48.761 22:23:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.761 Malloc0 00:06:49.021 22:23:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.280 Malloc1 00:06:49.280 22:23:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.280 22:23:44 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.280 22:23:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.539 /dev/nbd0 00:06:49.539 22:23:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.539 22:23:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.539 1+0 records in 00:06:49.539 1+0 
records out 00:06:49.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278776 s, 14.7 MB/s 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.539 22:23:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.539 22:23:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.539 22:23:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.539 22:23:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.798 /dev/nbd1 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.798 1+0 records in 00:06:49.798 1+0 records out 00:06:49.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414635 s, 9.9 MB/s 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.798 22:23:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.798 22:23:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.058 { 00:06:50.058 "nbd_device": "/dev/nbd0", 00:06:50.058 "bdev_name": "Malloc0" 00:06:50.058 }, 00:06:50.058 { 00:06:50.058 "nbd_device": "/dev/nbd1", 00:06:50.058 "bdev_name": "Malloc1" 00:06:50.058 } 00:06:50.058 ]' 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.058 { 00:06:50.058 "nbd_device": "/dev/nbd0", 00:06:50.058 "bdev_name": "Malloc0" 00:06:50.058 }, 00:06:50.058 { 00:06:50.058 "nbd_device": "/dev/nbd1", 00:06:50.058 "bdev_name": "Malloc1" 00:06:50.058 } 00:06:50.058 ]' 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.058 /dev/nbd1' 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.058 /dev/nbd1' 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.058 256+0 records in 00:06:50.058 256+0 records out 00:06:50.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133909 s, 78.3 MB/s 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.058 22:23:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.318 256+0 records in 00:06:50.318 256+0 records out 00:06:50.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317976 s, 33.0 MB/s 00:06:50.318 22:23:45 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.318 22:23:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.318 256+0 records in 00:06:50.318 256+0 records out 00:06:50.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322991 s, 32.5 MB/s 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.318 22:23:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.578 22:23:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.838 22:23:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.097 22:23:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.098 22:23:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.098 22:23:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.726 22:23:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.631 [2024-09-27 22:23:49.338899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.891 [2024-09-27 22:23:49.564815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.891 [2024-09-27 22:23:49.564817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.149 
[2024-09-27 22:23:49.793461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.149 [2024-09-27 22:23:49.793540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.717 22:23:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.717 spdk_app_start Round 1 00:06:54.717 22:23:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:54.717 22:23:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58318 /var/tmp/spdk-nbd.sock 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58318 ']' 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.717 22:23:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:54.717 22:23:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.977 Malloc0 00:06:54.977 22:23:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.546 Malloc1 00:06:55.546 22:23:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.546 22:23:51 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:55.546 /dev/nbd0 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.546 22:23:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.546 22:23:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:55.546 22:23:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:55.546 22:23:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.546 22:23:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.546 22:23:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:55.805 1+0 records in 00:06:55.805 1+0 records out 00:06:55.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314194 s, 13.0 MB/s 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:55.805 
22:23:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.805 22:23:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:55.805 22:23:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.805 22:23:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.805 22:23:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:55.805 /dev/nbd1 00:06:56.063 22:23:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.064 22:23:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.064 1+0 records in 00:06:56.064 1+0 records out 00:06:56.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036231 s, 11.3 MB/s 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.064 22:23:51 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.064 22:23:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:56.064 22:23:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.064 22:23:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.064 22:23:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.064 22:23:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.064 22:23:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.323 { 00:06:56.323 "nbd_device": "/dev/nbd0", 00:06:56.323 "bdev_name": "Malloc0" 00:06:56.323 }, 00:06:56.323 { 00:06:56.323 "nbd_device": "/dev/nbd1", 00:06:56.323 "bdev_name": "Malloc1" 00:06:56.323 } 00:06:56.323 ]' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.323 { 00:06:56.323 "nbd_device": "/dev/nbd0", 00:06:56.323 "bdev_name": "Malloc0" 00:06:56.323 }, 00:06:56.323 { 00:06:56.323 "nbd_device": "/dev/nbd1", 00:06:56.323 "bdev_name": "Malloc1" 00:06:56.323 } 00:06:56.323 ]' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.323 /dev/nbd1' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.323 /dev/nbd1' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.323 
22:23:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.323 22:23:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.323 256+0 records in 00:06:56.323 256+0 records out 00:06:56.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107851 s, 97.2 MB/s 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.323 256+0 records in 00:06:56.323 256+0 records out 00:06:56.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299301 s, 35.0 MB/s 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.323 256+0 records in 00:06:56.323 256+0 records out 00:06:56.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358339 s, 29.3 MB/s 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.323 22:23:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.582 22:23:52 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.582 22:23:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.847 22:23:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.106 22:23:52 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.106 22:23:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.106 22:23:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.675 22:23:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.580 [2024-09-27 22:23:55.447743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.838 [2024-09-27 22:23:55.671664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.838 [2024-09-27 22:23:55.671681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.097 [2024-09-27 22:23:55.900104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.097 [2024-09-27 22:23:55.900184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:00.665 22:23:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:00.665 spdk_app_start Round 2
22:23:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:00.665 22:23:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58318 /var/tmp/spdk-nbd.sock
00:07:00.665 22:23:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58318 ']'
00:07:00.665 22:23:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:00.665 22:23:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:00.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
22:23:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:00.665 22:23:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:00.665 22:23:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:00.925 22:23:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:00.925 22:23:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:07:00.925 22:23:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:01.184 Malloc0
00:07:01.184 22:23:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:01.444 Malloc1
00:07:01.444 22:23:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:01.444 22:23:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:01.703 /dev/nbd0
00:07:01.703 22:23:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:01.703 22:23:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:01.703 1+0 records in
00:07:01.703 1+0 records out
00:07:01.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335493 s, 12.2 MB/s
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:01.703 22:23:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:01.703 22:23:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:01.703 22:23:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:01.703 22:23:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:01.987 /dev/nbd1
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:01.987 1+0 records in
00:07:01.987 1+0 records out
00:07:01.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320368 s, 12.8 MB/s
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:01.987 22:23:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.987 22:23:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:02.264 {
00:07:02.264 "nbd_device": "/dev/nbd0",
00:07:02.264 "bdev_name": "Malloc0"
00:07:02.264 },
00:07:02.264 {
00:07:02.264 "nbd_device": "/dev/nbd1",
00:07:02.264 "bdev_name": "Malloc1"
00:07:02.264 }
00:07:02.264 ]'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:02.264 {
00:07:02.264 "nbd_device": "/dev/nbd0",
00:07:02.264 "bdev_name": "Malloc0"
00:07:02.264 },
00:07:02.264 {
00:07:02.264 "nbd_device": "/dev/nbd1",
00:07:02.264 "bdev_name": "Malloc1"
00:07:02.264 }
00:07:02.264 ]'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:02.264 /dev/nbd1'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:02.264 /dev/nbd1'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:02.264 22:23:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:02.264 256+0 records in
00:07:02.264 256+0 records out
00:07:02.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134702 s, 77.8 MB/s
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:02.264 256+0 records in
00:07:02.264 256+0 records out
00:07:02.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293184 s, 35.8 MB/s
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:02.264 256+0 records in
00:07:02.264 256+0 records out
00:07:02.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349221 s, 30.0 MB/s
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:02.264 22:23:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:02.265 22:23:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:02.524 22:23:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:02.783 22:23:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:03.042 22:23:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:03.042 22:23:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:03.608 22:23:59 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:05.509 [2024-09-27 22:24:01.328874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:05.769 [2024-09-27 22:24:01.554553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.769 [2024-09-27 22:24:01.554554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.029 [2024-09-27 22:24:01.780167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:06.029 [2024-09-27 22:24:01.780258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:06.596 22:24:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58318 /var/tmp/spdk-nbd.sock
00:07:06.596 22:24:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58318 ']'
00:07:06.596 22:24:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:06.596 22:24:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:06.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
22:24:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
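The nbd_dd_data_verify trace above performs a write/verify round trip: fill a temp file from /dev/urandom, dd it onto each /dev/nbdX, then cmp the first 1M of each device back against the source. The sketch below reproduces that pattern standalone; plain temp files stand in for the nbd devices (the real test additionally uses oflag=direct against the block devices), so it is an illustration of the technique, not the SPDK helper.

```shell
# Write/verify round trip, as in nbd_dd_data_verify: one random source
# file, copied to each target, then compared byte-for-byte with cmp.
tmp_file=$(mktemp)
dev0=$(mktemp) dev1=$(mktemp)   # stand-ins for /dev/nbd0 and /dev/nbd1

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none

rc=0
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
    cmp -b -n 1M "$tmp_file" "$dev" || rc=1   # -b prints differing bytes
done
rm -f "$tmp_file" "$dev0" "$dev1"
```

`cmp -n` accepts GNU multiplier suffixes, which is why `-n 1M` (the full 1048576-byte payload) works in the traced command as well.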
00:07:06.596 22:24:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:06.596 22:24:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:07:06.854 22:24:02 event.app_repeat -- event/event.sh@39 -- # killprocess 58318
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58318 ']'
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58318
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58318
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58318'
killing process with pid 58318
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58318
00:07:06.854 22:24:02 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58318
00:07:08.754 spdk_app_start is called in Round 0.
00:07:08.754 Shutdown signal received, stop current app iteration
00:07:08.754 Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 reinitialization...
00:07:08.754 spdk_app_start is called in Round 1.
00:07:08.754 Shutdown signal received, stop current app iteration
00:07:08.755 Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 reinitialization...
00:07:08.755 spdk_app_start is called in Round 2.
00:07:08.755 Shutdown signal received, stop current app iteration
00:07:08.755 Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 reinitialization...
00:07:08.755 spdk_app_start is called in Round 3.
00:07:08.755 Shutdown signal received, stop current app iteration
00:07:08.755 22:24:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:08.755 22:24:04 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:08.755
00:07:08.755 real 0m21.535s
00:07:08.755 user 0m44.608s
00:07:08.755 sys 0m3.538s
00:07:08.755 22:24:04 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:08.755 22:24:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:08.755 ************************************
00:07:08.755 END TEST app_repeat
00:07:08.755 ************************************
00:07:09.013 22:24:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:09.013 22:24:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:09.013 22:24:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:09.013 22:24:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.013 22:24:04 event -- common/autotest_common.sh@10 -- # set +x
00:07:09.013 ************************************
00:07:09.013 START TEST cpu_locks
00:07:09.013 ************************************
00:07:09.013 22:24:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:09.013 * Looking for test storage...
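The killprocess trace earlier in this run guards its SIGTERM with two checks: `kill -0` probes that the pid still exists, and `ps -o comm=` confirms the pid still names the expected process before signaling (avoiding a recycled pid). A standalone sketch of that guard is below; it launches a throwaway `sleep` instead of an spdk_tgt instance.

```shell
# killprocess-style guard: probe the pid, confirm its command name,
# then signal and reap it.
sleep 30 &
pid=$!

alive_before=no
kill -0 "$pid" 2>/dev/null && alive_before=yes   # pid exists?

# Refuse to signal a recycled pid: the command name must match.
if [ "$(ps --no-headers -o comm= -p "$pid")" = sleep ]; then
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true                  # reap; exit status is the signal's

alive_after=no
kill -0 "$pid" 2>/dev/null && alive_after=yes    # gone after the wait
```

`kill -0` sends no signal at all; it only reports (via its exit status) whether the process exists and is signalable, which is why the trace uses it both before and after the real `kill`.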
00:07:09.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:09.013 22:24:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:09.013 22:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:07:09.013 22:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:09.271 22:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:09.271 22:24:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:09.271 22:24:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.271 22:24:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:09.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.271 --rc genhtml_branch_coverage=1
00:07:09.271 --rc genhtml_function_coverage=1
00:07:09.271 --rc genhtml_legend=1
00:07:09.271 --rc geninfo_all_blocks=1
00:07:09.271 --rc geninfo_unexecuted_blocks=1
00:07:09.271
00:07:09.271 '
00:07:09.271 22:24:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:09.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.271 --rc genhtml_branch_coverage=1
00:07:09.271 --rc genhtml_function_coverage=1
00:07:09.271 --rc genhtml_legend=1
00:07:09.271 --rc geninfo_all_blocks=1
00:07:09.271 --rc geninfo_unexecuted_blocks=1
00:07:09.271
00:07:09.271 '
00:07:09.271 22:24:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:09.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.272 --rc genhtml_branch_coverage=1
00:07:09.272 --rc genhtml_function_coverage=1
00:07:09.272 --rc genhtml_legend=1
00:07:09.272 --rc geninfo_all_blocks=1
00:07:09.272 --rc geninfo_unexecuted_blocks=1
00:07:09.272
00:07:09.272 '
00:07:09.272 22:24:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:09.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.272 --rc genhtml_branch_coverage=1
00:07:09.272 --rc genhtml_function_coverage=1
00:07:09.272 --rc genhtml_legend=1
00:07:09.272 --rc geninfo_all_blocks=1
00:07:09.272 --rc geninfo_unexecuted_blocks=1
00:07:09.272
00:07:09.272 '
00:07:09.272 22:24:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:09.272 22:24:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:09.272 22:24:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:09.272 22:24:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:09.272 22:24:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:09.272 22:24:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:09.272 22:24:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:09.272 ************************************
00:07:09.272 START TEST default_locks
00:07:09.272 ************************************
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58789
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58789 /var/tmp/spdk.sock
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58789 ']'
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:09.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:09.272 22:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:09.272 [2024-09-27 22:24:05.051603] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:07:09.272 [2024-09-27 22:24:05.051848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58789 ]
00:07:09.530 [2024-09-27 22:24:05.246932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.789 [2024-09-27 22:24:05.504369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.166 22:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:11.166 22:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:07:11.166 22:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58789
00:07:11.166 22:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58789
00:07:11.166 22:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:11.732 22:24:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58789
00:07:11.732 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58789 ']'
00:07:11.732 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58789
00:07:11.732 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:07:11.732 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:11.732 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58789
00:07:11.991 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:11.991 22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 58789
22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58789'
22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58789
22:24:07 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58789
00:07:15.278 22:24:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58789
00:07:15.278 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:07:15.278 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58789
00:07:15.278 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:15.278 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.278 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:15.536 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.536 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58789
00:07:15.536 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58789 ']'
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:15.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58789) - No such process
00:07:15.537 ERROR: process (pid: 58789) is no longer running
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:15.537
00:07:15.537 real 0m6.228s
00:07:15.537 user 0m6.175s
00:07:15.537 sys 0m0.959s
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:15.537 22:24:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:15.537 ************************************
00:07:15.537 END TEST default_locks
00:07:15.537 ************************************
00:07:15.537 22:24:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:15.537 22:24:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:15.537 22:24:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:15.537 22:24:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:15.537 ************************************
00:07:15.537 START TEST default_locks_via_rpc
00:07:15.537 ************************************
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58887
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58887
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58887 ']'
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:15.537 22:24:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:15.537 [2024-09-27 22:24:11.344831] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:07:15.537 [2024-09-27 22:24:11.344996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ] 00:07:15.796 [2024-09-27 22:24:11.508103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.055 [2024-09-27 22:24:11.793696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.432 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.432 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.433 22:24:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58887 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58887 00:07:17.433 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.040 22:24:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58887 00:07:18.040 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58887 ']' 00:07:18.040 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58887 00:07:18.040 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:18.040 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.040 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58887 00:07:18.299 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.299 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.299 killing process with pid 58887 00:07:18.299 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58887' 00:07:18.299 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58887 00:07:18.299 22:24:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58887 00:07:21.670 00:07:21.670 real 0m6.065s 00:07:21.670 user 0m6.029s 00:07:21.670 sys 0m0.952s 00:07:21.670 22:24:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.670 22:24:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.670 ************************************ 00:07:21.670 END TEST default_locks_via_rpc 00:07:21.670 ************************************ 00:07:21.670 22:24:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:21.670 22:24:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.670 22:24:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.670 22:24:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.670 ************************************ 00:07:21.670 START TEST non_locking_app_on_locked_coremask 00:07:21.670 ************************************ 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58985 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58985 /var/tmp/spdk.sock 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58985 ']' 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.670 22:24:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.670 [2024-09-27 22:24:17.494678] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:07:21.670 [2024-09-27 22:24:17.494829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:07:21.929 [2024-09-27 22:24:17.672200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.188 [2024-09-27 22:24:17.915175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59022 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59022 /var/tmp/spdk2.sock 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59022 ']' 00:07:24.722 22:24:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.722 22:24:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.722 [2024-09-27 22:24:20.168425] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:07:24.722 [2024-09-27 22:24:20.169015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59022 ] 00:07:24.722 [2024-09-27 22:24:20.343631] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:24.722 [2024-09-27 22:24:20.343707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.982 [2024-09-27 22:24:20.818600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.518 22:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.518 22:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:27.518 22:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58985 00:07:27.518 22:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58985 00:07:27.518 22:24:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58985 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58985 ']' 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58985 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58985 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.422 killing process with pid 58985 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58985' 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58985 00:07:29.422 22:24:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58985 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59022 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59022 ']' 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59022 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59022 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.988 killing process with pid 59022 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59022' 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59022 00:07:35.988 22:24:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59022 00:07:39.296 00:07:39.296 real 0m17.340s 00:07:39.296 user 0m17.679s 00:07:39.296 sys 0m2.048s 00:07:39.296 22:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:39.296 22:24:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.296 ************************************ 00:07:39.296 END TEST non_locking_app_on_locked_coremask 00:07:39.296 ************************************ 00:07:39.296 22:24:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:39.296 22:24:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.296 22:24:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.296 22:24:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.296 ************************************ 00:07:39.296 START TEST locking_app_on_unlocked_coremask 00:07:39.296 ************************************ 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59214 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59214 /var/tmp/spdk.sock 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59214 ']' 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.296 22:24:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.296 [2024-09-27 22:24:34.883691] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:07:39.296 [2024-09-27 22:24:34.883833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:07:39.296 [2024-09-27 22:24:35.055615] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:39.296 [2024-09-27 22:24:35.055690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.554 [2024-09-27 22:24:35.292180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59241 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59241 /var/tmp/spdk2.sock 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59241 ']' 
00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.967 22:24:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.967 [2024-09-27 22:24:36.732034] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:07:40.967 [2024-09-27 22:24:36.732165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:07:41.225 [2024-09-27 22:24:36.898652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.791 [2024-09-27 22:24:37.363666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.322 22:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.322 22:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:44.322 22:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59241 00:07:44.322 22:24:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59241 00:07:44.322 22:24:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59214 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59214 ']' 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59214 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59214 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.696 killing process with pid 59214 00:07:45.696 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.697 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59214' 00:07:45.697 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59214 00:07:45.697 22:24:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59214 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59241 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59241 ']' 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59241 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59241 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.309 killing process with pid 59241 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59241' 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59241 00:07:52.309 22:24:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59241 00:07:55.686 00:07:55.686 real 0m16.402s 00:07:55.686 user 0m16.583s 00:07:55.686 sys 0m1.865s 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.686 ************************************ 00:07:55.686 END TEST locking_app_on_unlocked_coremask 00:07:55.686 ************************************ 00:07:55.686 22:24:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:55.686 22:24:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.686 22:24:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.686 22:24:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.686 ************************************ 00:07:55.686 START TEST 
locking_app_on_locked_coremask 00:07:55.686 ************************************ 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59434 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59434 /var/tmp/spdk.sock 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59434 ']' 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.686 22:24:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.686 [2024-09-27 22:24:51.372404] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:07:55.686 [2024-09-27 22:24:51.372602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59434 ] 00:07:55.686 [2024-09-27 22:24:51.545242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.945 [2024-09-27 22:24:51.793803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59457 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59457 /var/tmp/spdk2.sock 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59457 /var/tmp/spdk2.sock 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:57.364 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59457 /var/tmp/spdk2.sock 00:07:57.365 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59457 ']' 00:07:57.365 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.365 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.365 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.365 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.365 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.365 [2024-09-27 22:24:53.215602] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:07:57.365 [2024-09-27 22:24:53.215756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ] 00:07:57.635 [2024-09-27 22:24:53.389606] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59434 has claimed it. 00:07:57.635 [2024-09-27 22:24:53.389699] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:58.202 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59457) - No such process
00:07:58.202 ERROR: process (pid: 59457) is no longer running
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59434
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59434
00:07:58.202 22:24:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59434
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59434 ']'
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59434
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59434
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:58.767 killing process with pid 59434
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59434'
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59434
00:07:58.767 22:24:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59434
00:08:02.961
00:08:02.961 real	0m6.722s
00:08:02.961 user	0m6.825s
00:08:02.961 sys	0m1.138s
00:08:02.961 22:24:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:02.961 22:24:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:02.961 ************************************
00:08:02.961 END TEST locking_app_on_locked_coremask
00:08:02.961 ************************************
00:08:02.961 22:24:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:02.961 22:24:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:02.961 22:24:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:02.961 22:24:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:02.961 ************************************
00:08:02.961 START TEST locking_overlapped_coremask
00:08:02.961 ************************************
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59543
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59543 /var/tmp/spdk.sock
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59543 ']'
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:02.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:02.961 22:24:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:02.961 [2024-09-27 22:24:58.161700] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:08:02.961 [2024-09-27 22:24:58.161840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59543 ]
00:08:02.961 [2024-09-27 22:24:58.332600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:02.961 [2024-09-27 22:24:58.572728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.961 [2024-09-27 22:24:58.572881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.961 [2024-09-27 22:24:58.572930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59573
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59573 /var/tmp/spdk2.sock
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59573 /var/tmp/spdk2.sock
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59573 /var/tmp/spdk2.sock
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59573 ']'
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:04.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:04.338 22:24:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:04.338 [2024-09-27 22:24:59.987595] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:08:04.338 [2024-09-27 22:24:59.987746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59573 ]
00:08:04.338 [2024-09-27 22:25:00.157035] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59543 has claimed it.
00:08:04.338 [2024-09-27 22:25:00.157105] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:04.906 ERROR: process (pid: 59573) is no longer running
00:08:04.906 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59573) - No such process
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59543
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59543 ']'
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59543
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59543
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:04.906 22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59543'
killing process with pid 59543
22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59543
22:25:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59543
00:08:08.197
00:08:08.197 real	0m5.900s
00:08:08.197 user	0m15.681s
00:08:08.197 sys	0m0.739s
00:08:08.197 22:25:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:08.197 22:25:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:08.197 ************************************
00:08:08.197 END TEST locking_overlapped_coremask
00:08:08.197 ************************************
00:08:08.197 22:25:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:08.197 22:25:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:08.197 22:25:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:08.197 22:25:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:08.197 ************************************
00:08:08.197 START TEST locking_overlapped_coremask_via_rpc
00:08:08.197 ************************************
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59647
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59647 /var/tmp/spdk.sock
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59647 ']'
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:08.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:08.197 22:25:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:08.456 [2024-09-27 22:25:04.146505] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:08:08.456 [2024-09-27 22:25:04.146649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59647 ]
00:08:08.456 [2024-09-27 22:25:04.319044] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:08.456 [2024-09-27 22:25:04.319119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:08.758 [2024-09-27 22:25:04.554777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:08.758 [2024-09-27 22:25:04.554867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.758 [2024-09-27 22:25:04.554897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:10.137 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59666
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59666 /var/tmp/spdk2.sock
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59666 ']'
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:10.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:10.138 22:25:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:10.399 [2024-09-27 22:25:05.956228] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:08:10.399 [2024-09-27 22:25:05.956358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59666 ]
00:08:10.399 [2024-09-27 22:25:06.123998] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:10.399 [2024-09-27 22:25:06.124077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:10.968 [2024-09-27 22:25:06.612766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:08:10.968 [2024-09-27 22:25:06.612848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:10.968 [2024-09-27 22:25:06.612890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:13.499 [2024-09-27 22:25:09.203180] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59647 has claimed it.
00:08:13.499 request:
00:08:13.499 {
00:08:13.499 "method": "framework_enable_cpumask_locks",
00:08:13.499 "req_id": 1
00:08:13.499 }
00:08:13.499 Got JSON-RPC error response
00:08:13.499 response:
00:08:13.499 {
00:08:13.499 "code": -32603,
00:08:13.499 "message": "Failed to claim CPU core: 2"
00:08:13.499 }
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59647 /var/tmp/spdk.sock
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59647 ']'
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:13.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:13.499 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59666 /var/tmp/spdk2.sock
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59666 ']'
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:13.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:13.758 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:14.016
00:08:14.016 real	0m5.656s
00:08:14.016 user	0m1.324s
00:08:14.016 sys	0m0.243s
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:14.016 22:25:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:14.016 ************************************
00:08:14.016 END TEST locking_overlapped_coremask_via_rpc
00:08:14.016 ************************************
00:08:14.016 22:25:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:08:14.016 22:25:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59647 ]]
00:08:14.016 22:25:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59647
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59647 ']'
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59647
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59647
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59647'
killing process with pid 59647
00:08:14.016 22:25:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59647
22:25:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59647
00:08:17.322 22:25:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59666 ]]
00:08:17.322 22:25:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59666
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59666 ']'
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59666
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59666
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:08:17.322 22:25:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59666'
killing process with pid 59666
22:25:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59666
22:25:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59666
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59647 ]]
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59647
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59647 ']'
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59647
00:08:20.605 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59647) - No such process
00:08:20.605 Process with pid 59647 is not found
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59647 is not found'
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59666 ]]
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59666
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59666 ']'
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59666
00:08:20.605 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59666) - No such process
00:08:20.605 Process with pid 59666 is not found
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59666 is not found'
00:08:20.605 22:25:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:20.605
00:08:20.605 real	1m11.787s
00:08:20.605 user	1m57.330s
00:08:20.605 sys	0m9.372s
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.605 22:25:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:20.605 ************************************
00:08:20.605 END TEST cpu_locks
00:08:20.605 ************************************
00:08:20.864
00:08:20.864 real	1m47.094s
00:08:20.864 user	3m5.851s
00:08:20.864 sys	0m14.354s
00:08:20.864 22:25:16 event -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.864 22:25:16 event -- common/autotest_common.sh@10 -- # set +x
00:08:20.864 ************************************
00:08:20.864 END TEST event
00:08:20.864 ************************************
00:08:20.864 22:25:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:20.864 22:25:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:20.864 22:25:16 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:20.864 22:25:16 -- common/autotest_common.sh@10 -- # set +x
00:08:20.864 ************************************
00:08:20.864 START TEST thread
00:08:20.864 ************************************
00:08:20.864 22:25:16 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:20.864 * Looking for test storage...
00:08:20.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:08:20.864 22:25:16 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:20.864 22:25:16 thread -- common/autotest_common.sh@1681 -- # lcov --version
00:08:20.864 22:25:16 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:21.123 22:25:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:21.123 22:25:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:21.123 22:25:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:21.123 22:25:16 thread -- scripts/common.sh@336 -- # IFS=.-:
00:08:21.123 22:25:16 thread -- scripts/common.sh@336 -- # read -ra ver1
00:08:21.123 22:25:16 thread -- scripts/common.sh@337 -- # IFS=.-:
00:08:21.123 22:25:16 thread -- scripts/common.sh@337 -- # read -ra ver2
00:08:21.123 22:25:16 thread -- scripts/common.sh@338 -- # local 'op=<'
00:08:21.123 22:25:16 thread -- scripts/common.sh@340 -- # ver1_l=2
00:08:21.123 22:25:16 thread -- scripts/common.sh@341 -- # ver2_l=1
00:08:21.123 22:25:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:21.123 22:25:16 thread -- scripts/common.sh@344 -- # case "$op" in
00:08:21.123 22:25:16 thread -- scripts/common.sh@345 -- # : 1
00:08:21.123 22:25:16 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:21.123 22:25:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:21.123 22:25:16 thread -- scripts/common.sh@365 -- # decimal 1
00:08:21.123 22:25:16 thread -- scripts/common.sh@353 -- # local d=1
00:08:21.123 22:25:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:21.123 22:25:16 thread -- scripts/common.sh@355 -- # echo 1
00:08:21.123 22:25:16 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:08:21.123 22:25:16 thread -- scripts/common.sh@366 -- # decimal 2
00:08:21.123 22:25:16 thread -- scripts/common.sh@353 -- # local d=2
00:08:21.123 22:25:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:21.123 22:25:16 thread -- scripts/common.sh@355 -- # echo 2
00:08:21.123 22:25:16 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:08:21.123 22:25:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:21.123 22:25:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:21.123 22:25:16 thread -- scripts/common.sh@368 -- # return 0
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:21.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.123 --rc genhtml_branch_coverage=1
00:08:21.123 --rc genhtml_function_coverage=1
00:08:21.123 --rc genhtml_legend=1
00:08:21.123 --rc geninfo_all_blocks=1
00:08:21.123 --rc geninfo_unexecuted_blocks=1
00:08:21.123
00:08:21.123 '
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:21.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.123 --rc genhtml_branch_coverage=1
00:08:21.123 --rc genhtml_function_coverage=1
00:08:21.123 --rc genhtml_legend=1
00:08:21.123 --rc geninfo_all_blocks=1
00:08:21.123 --rc geninfo_unexecuted_blocks=1
00:08:21.123
00:08:21.123 '
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:08:21.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.123 --rc genhtml_branch_coverage=1
00:08:21.123 --rc genhtml_function_coverage=1
00:08:21.123 --rc genhtml_legend=1
00:08:21.123 --rc geninfo_all_blocks=1
00:08:21.123 --rc geninfo_unexecuted_blocks=1
00:08:21.123
00:08:21.123 '
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:08:21.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.123 --rc genhtml_branch_coverage=1
00:08:21.123 --rc genhtml_function_coverage=1
00:08:21.123 --rc genhtml_legend=1
00:08:21.123 --rc geninfo_all_blocks=1
00:08:21.123 --rc geninfo_unexecuted_blocks=1
00:08:21.123
00:08:21.123 '
00:08:21.123 22:25:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:21.123 22:25:16 thread -- common/autotest_common.sh@10 -- # set +x
00:08:21.123 ************************************
00:08:21.123 START TEST thread_poller_perf
00:08:21.123 ************************************
00:08:21.123 22:25:16 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:21.123 [2024-09-27 22:25:16.891240] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:08:21.123 [2024-09-27 22:25:16.891359] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59893 ]
00:08:21.382 [2024-09-27 22:25:17.053706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.641 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:08:21.641 [2024-09-27 22:25:17.333799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.019 ======================================
00:08:23.019 busy:2502140388 (cyc)
00:08:23.019 total_run_count: 379000
00:08:23.019 tsc_hz: 2490000000 (cyc)
00:08:23.019 ======================================
00:08:23.019 poller_cost: 6601 (cyc), 2651 (nsec)
00:08:23.019
00:08:23.019 real	0m1.896s
00:08:23.019 user	0m1.654s
00:08:23.019 sys	0m0.132s
00:08:23.019 22:25:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:23.019 22:25:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:23.019 ************************************
00:08:23.019 END TEST thread_poller_perf
00:08:23.019 ************************************
00:08:23.019 22:25:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:23.019 22:25:18 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:08:23.019 22:25:18 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:23.019 22:25:18 thread -- common/autotest_common.sh@10 -- # set +x
00:08:23.019 ************************************
00:08:23.019 START TEST thread_poller_perf
00:08:23.019 ************************************
00:08:23.019 22:25:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b
1000 -l 0 -t 1 00:08:23.019 [2024-09-27 22:25:18.859512] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:08:23.019 [2024-09-27 22:25:18.859622] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:08:23.278 [2024-09-27 22:25:19.027471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.536 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:23.536 [2024-09-27 22:25:19.263524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.915 ====================================== 00:08:24.915 busy:2493886974 (cyc) 00:08:24.915 total_run_count: 5064000 00:08:24.915 tsc_hz: 2490000000 (cyc) 00:08:24.915 ====================================== 00:08:24.915 poller_cost: 492 (cyc), 197 (nsec) 00:08:24.915 00:08:24.915 real 0m1.862s 00:08:24.915 user 0m1.637s 00:08:24.915 sys 0m0.117s 00:08:24.915 22:25:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.915 22:25:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 ************************************ 00:08:24.915 END TEST thread_poller_perf 00:08:24.915 ************************************ 00:08:24.915 22:25:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:24.915 00:08:24.915 real 0m4.121s 00:08:24.915 user 0m3.450s 00:08:24.915 sys 0m0.461s 00:08:24.915 22:25:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.915 22:25:20 thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 ************************************ 00:08:24.915 END TEST thread 00:08:24.915 ************************************ 00:08:24.915 22:25:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:24.915 22:25:20 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:24.915 22:25:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.915 22:25:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.915 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:08:25.174 ************************************ 00:08:25.174 START TEST app_cmdline 00:08:25.174 ************************************ 00:08:25.174 22:25:20 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:25.174 * Looking for test storage... 00:08:25.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:25.174 22:25:20 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:25.174 22:25:20 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:08:25.174 22:25:20 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:25.174 22:25:20 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:25.174 22:25:20 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:25.174 22:25:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.175 22:25:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.175 22:25:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.175 --rc genhtml_branch_coverage=1 00:08:25.175 --rc genhtml_function_coverage=1 00:08:25.175 --rc genhtml_legend=1 00:08:25.175 --rc geninfo_all_blocks=1 00:08:25.175 --rc geninfo_unexecuted_blocks=1 00:08:25.175 00:08:25.175 ' 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.175 --rc genhtml_branch_coverage=1 00:08:25.175 --rc 
genhtml_function_coverage=1 00:08:25.175 --rc genhtml_legend=1 00:08:25.175 --rc geninfo_all_blocks=1 00:08:25.175 --rc geninfo_unexecuted_blocks=1 00:08:25.175 00:08:25.175 ' 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.175 --rc genhtml_branch_coverage=1 00:08:25.175 --rc genhtml_function_coverage=1 00:08:25.175 --rc genhtml_legend=1 00:08:25.175 --rc geninfo_all_blocks=1 00:08:25.175 --rc geninfo_unexecuted_blocks=1 00:08:25.175 00:08:25.175 ' 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.175 --rc genhtml_branch_coverage=1 00:08:25.175 --rc genhtml_function_coverage=1 00:08:25.175 --rc genhtml_legend=1 00:08:25.175 --rc geninfo_all_blocks=1 00:08:25.175 --rc geninfo_unexecuted_blocks=1 00:08:25.175 00:08:25.175 ' 00:08:25.175 22:25:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:25.175 22:25:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60020 00:08:25.175 22:25:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:25.175 22:25:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60020 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60020 ']' 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.175 22:25:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.434 [2024-09-27 22:25:21.104637] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:08:25.434 [2024-09-27 22:25:21.104752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60020 ] 00:08:25.434 [2024-09-27 22:25:21.273759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.693 [2024-09-27 22:25:21.501689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.116 22:25:22 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.116 22:25:22 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:27.116 { 00:08:27.116 "version": "SPDK v25.01-pre git sha1 a2e043c42", 00:08:27.116 "fields": { 00:08:27.116 "major": 25, 00:08:27.116 "minor": 1, 00:08:27.116 "patch": 0, 00:08:27.116 "suffix": "-pre", 00:08:27.116 "commit": "a2e043c42" 00:08:27.116 } 00:08:27.116 } 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:27.116 22:25:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:27.116 
22:25:22 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.116 22:25:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:27.116 22:25:22 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.373 22:25:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:27.373 22:25:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:27.373 22:25:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.373 22:25:23 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.374 request: 00:08:27.374 { 00:08:27.374 "method": "env_dpdk_get_mem_stats", 
00:08:27.374 "req_id": 1 00:08:27.374 } 00:08:27.374 Got JSON-RPC error response 00:08:27.374 response: 00:08:27.374 { 00:08:27.374 "code": -32601, 00:08:27.374 "message": "Method not found" 00:08:27.374 } 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.374 22:25:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60020 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60020 ']' 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60020 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.374 22:25:23 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60020 00:08:27.632 22:25:23 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.632 22:25:23 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.632 22:25:23 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60020' 00:08:27.632 killing process with pid 60020 00:08:27.632 22:25:23 app_cmdline -- common/autotest_common.sh@969 -- # kill 60020 00:08:27.632 22:25:23 app_cmdline -- common/autotest_common.sh@974 -- # wait 60020 00:08:30.926 00:08:30.926 real 0m5.703s 00:08:30.926 user 0m5.808s 00:08:30.926 sys 0m0.697s 00:08:30.926 22:25:26 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.926 22:25:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 ************************************ 00:08:30.926 END TEST app_cmdline 00:08:30.926 ************************************ 00:08:30.926 22:25:26 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:30.926 22:25:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.926 22:25:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.926 22:25:26 -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 ************************************ 00:08:30.926 START TEST version 00:08:30.926 ************************************ 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:30.926 * Looking for test storage... 00:08:30.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1681 -- # lcov --version 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:30.926 22:25:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.926 22:25:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.926 22:25:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.926 22:25:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.926 22:25:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.926 22:25:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.926 22:25:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.926 22:25:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.926 22:25:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.926 22:25:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.926 22:25:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.926 22:25:26 version -- scripts/common.sh@344 -- # case "$op" in 00:08:30.926 22:25:26 version -- scripts/common.sh@345 -- # : 1 00:08:30.926 22:25:26 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.926 22:25:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.926 22:25:26 version -- scripts/common.sh@365 -- # decimal 1 00:08:30.926 22:25:26 version -- scripts/common.sh@353 -- # local d=1 00:08:30.926 22:25:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.926 22:25:26 version -- scripts/common.sh@355 -- # echo 1 00:08:30.926 22:25:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.926 22:25:26 version -- scripts/common.sh@366 -- # decimal 2 00:08:30.926 22:25:26 version -- scripts/common.sh@353 -- # local d=2 00:08:30.926 22:25:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.926 22:25:26 version -- scripts/common.sh@355 -- # echo 2 00:08:30.926 22:25:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.926 22:25:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.926 22:25:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.926 22:25:26 version -- scripts/common.sh@368 -- # return 0 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.926 --rc genhtml_branch_coverage=1 00:08:30.926 --rc genhtml_function_coverage=1 00:08:30.926 --rc genhtml_legend=1 00:08:30.926 --rc geninfo_all_blocks=1 00:08:30.926 --rc geninfo_unexecuted_blocks=1 00:08:30.926 00:08:30.926 ' 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.926 --rc genhtml_branch_coverage=1 00:08:30.926 --rc genhtml_function_coverage=1 00:08:30.926 --rc genhtml_legend=1 00:08:30.926 --rc geninfo_all_blocks=1 00:08:30.926 --rc geninfo_unexecuted_blocks=1 
00:08:30.926 00:08:30.926 ' 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.926 --rc genhtml_branch_coverage=1 00:08:30.926 --rc genhtml_function_coverage=1 00:08:30.926 --rc genhtml_legend=1 00:08:30.926 --rc geninfo_all_blocks=1 00:08:30.926 --rc geninfo_unexecuted_blocks=1 00:08:30.926 00:08:30.926 ' 00:08:30.926 22:25:26 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:30.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.926 --rc genhtml_branch_coverage=1 00:08:30.926 --rc genhtml_function_coverage=1 00:08:30.926 --rc genhtml_legend=1 00:08:30.926 --rc geninfo_all_blocks=1 00:08:30.926 --rc geninfo_unexecuted_blocks=1 00:08:30.926 00:08:30.926 ' 00:08:30.926 22:25:26 version -- app/version.sh@17 -- # get_header_version major 00:08:30.926 22:25:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:30.926 22:25:26 version -- app/version.sh@14 -- # cut -f2 00:08:30.926 22:25:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:30.926 22:25:26 version -- app/version.sh@17 -- # major=25 00:08:30.926 22:25:26 version -- app/version.sh@18 -- # get_header_version minor 00:08:30.926 22:25:26 version -- app/version.sh@14 -- # cut -f2 00:08:30.926 22:25:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:30.926 22:25:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:31.185 22:25:26 version -- app/version.sh@18 -- # minor=1 00:08:31.185 22:25:26 version -- app/version.sh@19 -- # get_header_version patch 00:08:31.185 22:25:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:31.185 22:25:26 version -- app/version.sh@14 -- # cut -f2 00:08:31.185 
22:25:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:31.186 22:25:26 version -- app/version.sh@19 -- # patch=0 00:08:31.186 22:25:26 version -- app/version.sh@20 -- # get_header_version suffix 00:08:31.186 22:25:26 version -- app/version.sh@14 -- # tr -d '"' 00:08:31.186 22:25:26 version -- app/version.sh@14 -- # cut -f2 00:08:31.186 22:25:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:31.186 22:25:26 version -- app/version.sh@20 -- # suffix=-pre 00:08:31.186 22:25:26 version -- app/version.sh@22 -- # version=25.1 00:08:31.186 22:25:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:31.186 22:25:26 version -- app/version.sh@28 -- # version=25.1rc0 00:08:31.186 22:25:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:31.186 22:25:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:31.186 22:25:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:31.186 22:25:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:31.186 00:08:31.186 real 0m0.308s 00:08:31.186 user 0m0.188s 00:08:31.186 sys 0m0.166s 00:08:31.186 ************************************ 00:08:31.186 END TEST version 00:08:31.186 ************************************ 00:08:31.186 22:25:26 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.186 22:25:26 version -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 22:25:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:31.186 22:25:26 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:31.186 22:25:26 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:31.186 22:25:26 -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:08:31.186 22:25:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.186 22:25:26 -- common/autotest_common.sh@10 -- # set +x 00:08:31.186 ************************************ 00:08:31.186 START TEST bdev_raid 00:08:31.186 ************************************ 00:08:31.186 22:25:26 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:31.186 * Looking for test storage... 00:08:31.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:31.186 22:25:27 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.446 22:25:27 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.446 --rc genhtml_branch_coverage=1 00:08:31.446 --rc genhtml_function_coverage=1 00:08:31.446 --rc genhtml_legend=1 00:08:31.446 --rc geninfo_all_blocks=1 00:08:31.446 --rc geninfo_unexecuted_blocks=1 00:08:31.446 00:08:31.446 ' 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.446 --rc genhtml_branch_coverage=1 00:08:31.446 --rc genhtml_function_coverage=1 00:08:31.446 --rc genhtml_legend=1 00:08:31.446 --rc geninfo_all_blocks=1 00:08:31.446 --rc geninfo_unexecuted_blocks=1 00:08:31.446 00:08:31.446 ' 00:08:31.446 22:25:27 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.446 --rc genhtml_branch_coverage=1 00:08:31.446 --rc genhtml_function_coverage=1 00:08:31.446 --rc genhtml_legend=1 00:08:31.446 --rc geninfo_all_blocks=1 00:08:31.446 --rc geninfo_unexecuted_blocks=1 00:08:31.446 00:08:31.446 ' 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.446 --rc genhtml_branch_coverage=1 00:08:31.446 --rc genhtml_function_coverage=1 00:08:31.446 --rc genhtml_legend=1 00:08:31.446 --rc geninfo_all_blocks=1 00:08:31.446 --rc geninfo_unexecuted_blocks=1 00:08:31.446 00:08:31.446 ' 00:08:31.446 22:25:27 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:31.446 22:25:27 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:31.446 22:25:27 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:31.446 22:25:27 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:31.446 22:25:27 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:31.446 22:25:27 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:31.446 22:25:27 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.446 22:25:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.446 ************************************ 00:08:31.446 START TEST raid1_resize_data_offset_test 00:08:31.446 ************************************ 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60224 00:08:31.446 Process raid pid: 60224 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60224' 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60224 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60224 ']' 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.446 22:25:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.446 [2024-09-27 22:25:27.287347] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:08:31.446 [2024-09-27 22:25:27.287537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.706 [2024-09-27 22:25:27.486894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.965 [2024-09-27 22:25:27.722212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.224 [2024-09-27 22:25:27.959697] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.224 [2024-09-27 22:25:27.959741] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.792 malloc0 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.792 malloc1 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.792 22:25:28 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.792 null0 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.792 [2024-09-27 22:25:28.621686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:32.792 [2024-09-27 22:25:28.623720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:32.792 [2024-09-27 22:25:28.623772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:32.792 [2024-09-27 22:25:28.623953] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:32.792 [2024-09-27 22:25:28.623967] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:32.792 [2024-09-27 22:25:28.624265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:32.792 [2024-09-27 22:25:28.624444] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:32.792 [2024-09-27 22:25:28.624459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:32.792 [2024-09-27 22:25:28.624605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.792 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.050 [2024-09-27 22:25:28.677599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.050 22:25:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 malloc2 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 [2024-09-27 22:25:29.319825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.616 [2024-09-27 22:25:29.339783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.616 [2024-09-27 22:25:29.341856] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60224 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60224 ']' 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60224 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60224 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.616 killing process with pid 60224 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60224' 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60224 00:08:33.616 [2024-09-27 22:25:29.435332] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.616 [2024-09-27 22:25:29.435478] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:33.616 22:25:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60224 00:08:33.616 [2024-09-27 22:25:29.435539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.616 [2024-09-27 22:25:29.435556] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:33.616 [2024-09-27 22:25:29.461777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.616 [2024-09-27 22:25:29.462144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.616 [2024-09-27 22:25:29.462168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:35.516 [2024-09-27 22:25:31.284786] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.417 22:25:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:37.417 00:08:37.417 real 0m6.081s 00:08:37.417 user 0m5.884s 00:08:37.417 sys 0m0.653s 00:08:37.417 22:25:33 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.417 22:25:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.417 ************************************ 00:08:37.417 END TEST raid1_resize_data_offset_test 00:08:37.417 ************************************ 00:08:37.676 22:25:33 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:37.676 22:25:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.676 22:25:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.676 22:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.676 ************************************ 00:08:37.676 START TEST raid0_resize_superblock_test 00:08:37.676 ************************************ 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60324 00:08:37.676 Process raid pid: 60324 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60324' 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60324 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60324 ']' 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.676 22:25:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.676 [2024-09-27 22:25:33.429866] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:08:37.676 [2024-09-27 22:25:33.430009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.934 [2024-09-27 22:25:33.595951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.191 [2024-09-27 22:25:33.824116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.191 [2024-09-27 22:25:34.062162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.191 [2024-09-27 22:25:34.062207] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.756 22:25:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.756 22:25:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:38.756 22:25:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:38.756 22:25:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.756 22:25:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:39.323 malloc0 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.323 [2024-09-27 22:25:35.153066] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:39.323 [2024-09-27 22:25:35.153146] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.323 [2024-09-27 22:25:35.153169] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:39.323 [2024-09-27 22:25:35.153184] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.323 [2024-09-27 22:25:35.155703] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.323 [2024-09-27 22:25:35.155756] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:39.323 pt0 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.323 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 d024d890-ed44-48b7-804e-d306f499a751 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 cf20cdff-a89f-46d1-a6b3-433974864a36 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 28baaa42-ce4d-4e5b-9e0b-c5cf49ea44f1 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 [2024-09-27 22:25:35.284182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev cf20cdff-a89f-46d1-a6b3-433974864a36 is claimed 00:08:39.583 [2024-09-27 22:25:35.284299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28baaa42-ce4d-4e5b-9e0b-c5cf49ea44f1 is claimed 00:08:39.583 [2024-09-27 22:25:35.284442] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:39.583 [2024-09-27 22:25:35.284462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:39.583 [2024-09-27 22:25:35.284751] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:39.583 [2024-09-27 22:25:35.284957] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:39.583 [2024-09-27 22:25:35.284970] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:39.583 [2024-09-27 22:25:35.285175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:39.583 22:25:35 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 [2024-09-27 22:25:35.384322] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 [2024-09-27 22:25:35.424239] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:39.583 [2024-09-27 22:25:35.424276] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cf20cdff-a89f-46d1-a6b3-433974864a36' was resized: old size 131072, new size 204800 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.583 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.583 [2024-09-27 22:25:35.432154] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:39.584 [2024-09-27 22:25:35.432186] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '28baaa42-ce4d-4e5b-9e0b-c5cf49ea44f1' was resized: old size 131072, new size 204800 00:08:39.584 [2024-09-27 22:25:35.432218] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:39.584 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.584 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:39.584 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:39.584 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.584 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.844 22:25:35 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.844 [2024-09-27 22:25:35.540098] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.844 [2024-09-27 22:25:35.579798] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:39.844 [2024-09-27 22:25:35.579878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:39.844 [2024-09-27 22:25:35.579905] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.844 [2024-09-27 22:25:35.579922] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:39.844 [2024-09-27 22:25:35.580048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.844 [2024-09-27 22:25:35.580080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.844 [2024-09-27 22:25:35.580094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.844 [2024-09-27 22:25:35.587750] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:39.844 [2024-09-27 22:25:35.587813] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.844 [2024-09-27 22:25:35.587835] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:39.844 [2024-09-27 22:25:35.587849] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.844 [2024-09-27 22:25:35.590269] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.844 [2024-09-27 22:25:35.590312] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:39.844 [2024-09-27 22:25:35.591925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cf20cdff-a89f-46d1-a6b3-433974864a36 00:08:39.844 [2024-09-27 22:25:35.592020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev cf20cdff-a89f-46d1-a6b3-433974864a36 is claimed 00:08:39.844 [2024-09-27 22:25:35.592139] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 28baaa42-ce4d-4e5b-9e0b-c5cf49ea44f1 00:08:39.844 [2024-09-27 22:25:35.592160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 28baaa42-ce4d-4e5b-9e0b-c5cf49ea44f1 is claimed 00:08:39.844 [2024-09-27 22:25:35.592317] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 28baaa42-ce4d-4e5b-9e0b-c5cf49ea44f1 (2) smaller than existing raid bdev Raid (3) 00:08:39.844 [2024-09-27 22:25:35.592346] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev cf20cdff-a89f-46d1-a6b3-433974864a36: File exists 00:08:39.844 [2024-09-27 22:25:35.592390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:39.844 [2024-09-27 22:25:35.592404] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:39.844 pt0 00:08:39.844 [2024-09-27 22:25:35.592669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:39.844 [2024-09-27 22:25:35.592819] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:39.844 [2024-09-27 22:25:35.592837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:39.844 [2024-09-27 22:25:35.592999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.844 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:39.845 [2024-09-27 22:25:35.612213] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60324 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60324 ']' 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60324 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60324 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.845 killing process with pid 60324 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60324' 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60324 00:08:39.845 [2024-09-27 22:25:35.687455] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.845 [2024-09-27 22:25:35.687537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.845 22:25:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60324 00:08:39.845 [2024-09-27 22:25:35.687583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.845 [2024-09-27 22:25:35.687593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:41.747 [2024-09-27 22:25:37.168004] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.650 22:25:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:43.650 00:08:43.650 real 0m5.847s 00:08:43.650 user 0m5.955s 00:08:43.650 sys 0m0.701s 00:08:43.650 22:25:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.650 22:25:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.650 
************************************ 00:08:43.650 END TEST raid0_resize_superblock_test 00:08:43.650 ************************************ 00:08:43.650 22:25:39 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:43.650 22:25:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.650 22:25:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.650 22:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.650 ************************************ 00:08:43.650 START TEST raid1_resize_superblock_test 00:08:43.650 ************************************ 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60438 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.650 Process raid pid: 60438 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60438' 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60438 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60438 ']' 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.650 22:25:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.650 [2024-09-27 22:25:39.355775] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:08:43.650 [2024-09-27 22:25:39.355993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.908 [2024-09-27 22:25:39.530733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.908 [2024-09-27 22:25:39.767469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.166 [2024-09-27 22:25:40.010769] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.166 [2024-09-27 22:25:40.010837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.735 22:25:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.735 22:25:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:44.735 22:25:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:44.735 22:25:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.735 22:25:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.302 malloc0 00:08:45.302 
22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.302 [2024-09-27 22:25:41.131403] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:45.302 [2024-09-27 22:25:41.131482] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.302 [2024-09-27 22:25:41.131506] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:45.302 [2024-09-27 22:25:41.131522] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.302 [2024-09-27 22:25:41.133911] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.302 [2024-09-27 22:25:41.133957] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:45.302 pt0 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.302 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 ad37b1e3-61f3-4d99-a5eb-0feda3ab1488 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:45.564 22:25:41 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 f22b1426-a4e4-44ae-9d39-65fcc74dce9d 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 4f3166b4-a020-408d-bf13-f05d3af1ad4a 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 [2024-09-27 22:25:41.262247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f22b1426-a4e4-44ae-9d39-65fcc74dce9d is claimed 00:08:45.564 [2024-09-27 22:25:41.262349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f3166b4-a020-408d-bf13-f05d3af1ad4a is claimed 00:08:45.564 [2024-09-27 22:25:41.262489] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:45.564 [2024-09-27 22:25:41.262509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:45.564 [2024-09-27 22:25:41.262770] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:45.564 [2024-09-27 22:25:41.263072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:45.564 [2024-09-27 22:25:41.263097] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:45.564 [2024-09-27 22:25:41.263296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 [2024-09-27 22:25:41.362414] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 [2024-09-27 22:25:41.402356] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:45.564 [2024-09-27 22:25:41.402395] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f22b1426-a4e4-44ae-9d39-65fcc74dce9d' was resized: old size 131072, new size 204800 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.564 [2024-09-27 22:25:41.414412] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:45.564 [2024-09-27 22:25:41.414459] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4f3166b4-a020-408d-bf13-f05d3af1ad4a' was resized: old size 131072, new size 204800 00:08:45.564 [2024-09-27 22:25:41.414510] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:45.564 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:45.565 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.565 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.825 22:25:41 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 [2024-09-27 22:25:41.514155] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 [2024-09-27 22:25:41.549855] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:45.825 [2024-09-27 22:25:41.549935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:45.825 [2024-09-27 22:25:41.549986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:45.825 [2024-09-27 22:25:41.550137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.825 [2024-09-27 22:25:41.550319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.825 [2024-09-27 22:25:41.550385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.825 [2024-09-27 22:25:41.550401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 [2024-09-27 22:25:41.561807] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:45.825 [2024-09-27 22:25:41.561874] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.825 [2024-09-27 22:25:41.561898] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:45.825 [2024-09-27 22:25:41.561912] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.825 [2024-09-27 22:25:41.564322] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.825 [2024-09-27 22:25:41.564369] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:45.825 [2024-09-27 22:25:41.565966] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f22b1426-a4e4-44ae-9d39-65fcc74dce9d 00:08:45.825 [2024-09-27 22:25:41.566051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f22b1426-a4e4-44ae-9d39-65fcc74dce9d is claimed 00:08:45.825 [2024-09-27 22:25:41.566167] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4f3166b4-a020-408d-bf13-f05d3af1ad4a 00:08:45.825 [2024-09-27 22:25:41.566188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f3166b4-a020-408d-bf13-f05d3af1ad4a is claimed 00:08:45.825 [2024-09-27 22:25:41.566353] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4f3166b4-a020-408d-bf13-f05d3af1ad4a (2) smaller than existing raid bdev Raid (3) 00:08:45.825 [2024-09-27 22:25:41.566378] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f22b1426-a4e4-44ae-9d39-65fcc74dce9d: File exists 00:08:45.825 [2024-09-27 22:25:41.566417] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:45.825 [2024-09-27 22:25:41.566431] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:45.825 [2024-09-27 22:25:41.566692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:45.825 pt0 00:08:45.825 [2024-09-27 22:25:41.566871] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:45.825 [2024-09-27 22:25:41.566881] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:45.825 [2024-09-27 22:25:41.567045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.825 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.825 [2024-09-27 22:25:41.590254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60438 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60438 ']' 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60438 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60438 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.826 killing process with pid 60438 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60438' 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60438 00:08:45.826 [2024-09-27 22:25:41.667096] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.826 [2024-09-27 22:25:41.667191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.826 22:25:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60438 00:08:45.826 [2024-09-27 22:25:41.667246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.826 [2024-09-27 22:25:41.667257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:47.729 [2024-09-27 22:25:43.094801] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.632 22:25:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:49.632 00:08:49.632 real 0m5.822s 00:08:49.632 user 0m5.948s 00:08:49.632 sys 0m0.748s 00:08:49.632 22:25:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.632 22:25:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 
************************************ 00:08:49.632 END TEST raid1_resize_superblock_test 00:08:49.632 ************************************ 00:08:49.632 22:25:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:49.632 22:25:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:49.632 22:25:45 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:49.632 22:25:45 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:49.632 22:25:45 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:49.632 22:25:45 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:49.632 22:25:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.632 22:25:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.632 22:25:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 ************************************ 00:08:49.632 START TEST raid_function_test_raid0 00:08:49.632 ************************************ 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:49.632 Process raid pid: 60553 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60553 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60553' 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60553 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60553 ']' 00:08:49.632 22:25:45 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.632 22:25:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 [2024-09-27 22:25:45.261566] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:08:49.632 [2024-09-27 22:25:45.261914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.632 [2024-09-27 22:25:45.431175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.889 [2024-09-27 22:25:45.668346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.182 [2024-09-27 22:25:45.913014] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.182 [2024-09-27 22:25:45.913219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:50.747 Base_1 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:50.747 Base_2 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:50.747 [2024-09-27 22:25:46.481912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:50.747 [2024-09-27 22:25:46.484127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:50.747 [2024-09-27 22:25:46.484320] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:50.747 [2024-09-27 22:25:46.484416] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:50.747 [2024-09-27 22:25:46.484693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.747 [2024-09-27 22:25:46.484839] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:50.747 [2024-09-27 22:25:46.484850] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:50.747 [2024-09-27 22:25:46.485020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:50.747 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:51.005 [2024-09-27 22:25:46.725618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:51.005 /dev/nbd0 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:51.005 
22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.005 1+0 records in 00:08:51.005 1+0 records out 00:08:51.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416188 s, 9.8 MB/s 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:51.005 22:25:46 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.263 { 00:08:51.263 "nbd_device": "/dev/nbd0", 00:08:51.263 "bdev_name": "raid" 00:08:51.263 } 00:08:51.263 ]' 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.263 { 00:08:51.263 "nbd_device": "/dev/nbd0", 00:08:51.263 "bdev_name": "raid" 00:08:51.263 } 00:08:51.263 ]' 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 
00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:51.263 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:51.521 4096+0 records in 00:08:51.521 4096+0 records out 00:08:51.521 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0390897 s, 53.6 MB/s 00:08:51.521 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:51.521 4096+0 records in 00:08:51.521 4096+0 records out 00:08:51.521 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.218685 s, 9.6 MB/s 00:08:51.521 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:51.521 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:51.779 128+0 records in 00:08:51.779 128+0 records out 00:08:51.779 65536 bytes (66 kB, 64 KiB) copied, 0.00169103 s, 38.8 MB/s 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:51.779 2035+0 records in 00:08:51.779 2035+0 records out 00:08:51.779 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0188722 s, 55.2 MB/s 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:51.779 22:25:47 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:51.779 456+0 records in 00:08:51.779 456+0 records out 00:08:51.779 233472 bytes (233 kB, 228 KiB) copied, 0.00595566 s, 39.2 MB/s 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.779 22:25:47 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.779 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.038 [2024-09-27 22:25:47.743614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:52.038 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:52.297 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.297 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.297 22:25:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60553 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60553 ']' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60553 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60553 00:08:52.297 killing process with pid 60553 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60553' 00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60553 
00:08:52.297 22:25:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60553 00:08:52.297 [2024-09-27 22:25:48.066313] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.297 [2024-09-27 22:25:48.066419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.297 [2024-09-27 22:25:48.066465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.297 [2024-09-27 22:25:48.066479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:52.555 [2024-09-27 22:25:48.277890] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.454 ************************************ 00:08:54.454 END TEST raid_function_test_raid0 00:08:54.454 ************************************ 00:08:54.454 22:25:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:54.454 00:08:54.454 real 0m5.044s 00:08:54.454 user 0m5.541s 00:08:54.454 sys 0m1.162s 00:08:54.454 22:25:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.454 22:25:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:54.454 22:25:50 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:54.454 22:25:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.454 22:25:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.454 22:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.454 ************************************ 00:08:54.454 START TEST raid_function_test_concat 00:08:54.454 ************************************ 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60693 00:08:54.454 Process raid pid: 60693 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60693' 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60693 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60693 ']' 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.454 22:25:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:54.711 [2024-09-27 22:25:50.381526] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:08:54.711 [2024-09-27 22:25:50.381660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.711 [2024-09-27 22:25:50.554042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.991 [2024-09-27 22:25:50.795982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.249 [2024-09-27 22:25:51.041430] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.249 [2024-09-27 22:25:51.041465] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:55.815 Base_1 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:55.815 Base_2 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:55.815 [2024-09-27 22:25:51.631613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:55.815 [2024-09-27 22:25:51.633694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:55.815 [2024-09-27 22:25:51.633778] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:55.815 [2024-09-27 22:25:51.633792] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:55.815 [2024-09-27 22:25:51.634085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:55.815 [2024-09-27 22:25:51.634233] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:55.815 [2024-09-27 22:25:51.634244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:55.815 [2024-09-27 22:25:51.634406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.815 22:25:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:55.815 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:56.073 [2024-09-27 22:25:51.879301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:56.073 /dev/nbd0 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.073 1+0 records in 00:08:56.073 1+0 records out 00:08:56.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715507 s, 5.7 MB/s 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.073 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:56.331 22:25:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:56.331 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:08:56.331 22:25:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:56.331 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:56.331 { 00:08:56.331 "nbd_device": "/dev/nbd0", 00:08:56.331 "bdev_name": "raid" 00:08:56.331 } 00:08:56.331 ]' 00:08:56.331 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:56.331 { 00:08:56.331 "nbd_device": "/dev/nbd0", 00:08:56.331 "bdev_name": "raid" 00:08:56.331 } 00:08:56.331 ]' 00:08:56.331 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:56.589 22:25:52 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:56.589 4096+0 records in 00:08:56.589 4096+0 records out 00:08:56.589 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0288482 s, 72.7 MB/s 00:08:56.589 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:56.847 4096+0 records in 00:08:56.847 4096+0 records out 00:08:56.847 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.250001 s, 8.4 MB/s 00:08:56.847 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:56.847 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:08:56.847 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:56.847 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:56.847 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:56.847 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:56.848 128+0 records in 00:08:56.848 128+0 records out 00:08:56.848 65536 bytes (66 kB, 64 KiB) copied, 0.00168963 s, 38.8 MB/s 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:56.848 2035+0 records in 00:08:56.848 2035+0 records out 00:08:56.848 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0140822 s, 74.0 MB/s 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:56.848 456+0 records in 00:08:56.848 456+0 records out 00:08:56.848 233472 bytes (233 kB, 228 KiB) copied, 0.00369068 s, 63.3 MB/s 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:56.848 22:25:52 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.848 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.107 [2024-09-27 22:25:52.898890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:57.107 22:25:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60693 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60693 ']' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60693 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.365 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60693 00:08:57.623 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.623 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.623 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60693' 00:08:57.623 
killing process with pid 60693 00:08:57.623 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60693 00:08:57.623 [2024-09-27 22:25:53.273724] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.623 22:25:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60693 00:08:57.623 [2024-09-27 22:25:53.273839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.623 [2024-09-27 22:25:53.273891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.623 [2024-09-27 22:25:53.273906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:57.623 [2024-09-27 22:25:53.483904] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.576 ************************************ 00:08:59.576 END TEST raid_function_test_concat 00:08:59.576 ************************************ 00:08:59.576 22:25:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:59.576 00:08:59.576 real 0m5.150s 00:08:59.576 user 0m5.637s 00:08:59.576 sys 0m1.209s 00:08:59.576 22:25:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.576 22:25:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:59.845 22:25:55 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:59.845 22:25:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.845 22:25:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.845 22:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.845 ************************************ 00:08:59.845 START TEST raid0_resize_test 00:08:59.845 ************************************ 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1125 -- # raid_resize_test 0 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60832 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60832' 00:08:59.845 Process raid pid: 60832 00:08:59.845 22:25:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60832 00:08:59.846 22:25:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60832 ']' 00:08:59.846 22:25:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.846 22:25:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.846 22:25:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.846 22:25:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.846 22:25:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.846 [2024-09-27 22:25:55.601563] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:08:59.846 [2024-09-27 22:25:55.601698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.104 [2024-09-27 22:25:55.761429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.362 [2024-09-27 22:25:56.005672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.620 [2024-09-27 22:25:56.248197] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.620 [2024-09-27 22:25:56.248248] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.878 Base_1 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:00.878 Base_2 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.878 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.878 [2024-09-27 22:25:56.752493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:00.878 [2024-09-27 22:25:56.754615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:00.878 [2024-09-27 22:25:56.754684] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:00.878 [2024-09-27 22:25:56.754698] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:00.878 [2024-09-27 22:25:56.754969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:01.137 [2024-09-27 22:25:56.755117] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:01.137 [2024-09-27 22:25:56.755139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:01.137 [2024-09-27 22:25:56.755296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:09:01.137 [2024-09-27 22:25:56.760423] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:01.137 [2024-09-27 22:25:56.760456] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:01.137 true 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.137 [2024-09-27 22:25:56.776560] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.137 [2024-09-27 22:25:56.816375] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:01.137 [2024-09-27 22:25:56.816408] 
bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:01.137 [2024-09-27 22:25:56.816445] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:01.137 true 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.137 [2024-09-27 22:25:56.832492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60832 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60832 ']' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60832 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60832 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.137 killing process with pid 60832 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60832' 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60832 00:09:01.137 [2024-09-27 22:25:56.898290] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.137 [2024-09-27 22:25:56.898380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.137 [2024-09-27 22:25:56.898429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.137 [2024-09-27 22:25:56.898441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:01.137 22:25:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60832 00:09:01.137 [2024-09-27 22:25:56.916472] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.041 22:25:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:03.041 00:09:03.041 real 0m3.381s 00:09:03.041 user 0m3.434s 00:09:03.041 sys 0m0.484s 00:09:03.041 22:25:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.041 22:25:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.041 ************************************ 00:09:03.041 END TEST raid0_resize_test 00:09:03.041 ************************************ 00:09:03.298 22:25:58 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:03.298 
22:25:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.299 22:25:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.299 22:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.299 ************************************ 00:09:03.299 START TEST raid1_resize_test 00:09:03.299 ************************************ 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60899 00:09:03.299 Process raid pid: 60899 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60899' 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60899 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60899 ']' 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.299 22:25:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.299 [2024-09-27 22:25:59.062487] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:09:03.299 [2024-09-27 22:25:59.062618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.557 [2024-09-27 22:25:59.239419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.816 [2024-09-27 22:25:59.472435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.074 [2024-09-27 22:25:59.721659] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.074 [2024-09-27 22:25:59.721699] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.333 
Base_1 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.333 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.597 Base_2 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.597 [2024-09-27 22:26:00.225516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:04.597 [2024-09-27 22:26:00.227734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:04.597 [2024-09-27 22:26:00.227929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:04.597 [2024-09-27 22:26:00.228029] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:04.597 [2024-09-27 22:26:00.228333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:04.597 [2024-09-27 22:26:00.228618] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:04.597 [2024-09-27 22:26:00.228727] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:04.597 [2024-09-27 22:26:00.228917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.597 [2024-09-27 22:26:00.233453] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:04.597 [2024-09-27 22:26:00.233592] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:04.597 true 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.597 [2024-09-27 22:26:00.249575] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.597 [2024-09-27 22:26:00.297407] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:04.597 [2024-09-27 22:26:00.297439] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:04.597 [2024-09-27 22:26:00.297479] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:04.597 true 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:04.597 [2024-09-27 22:26:00.309523] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60899 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60899 ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60899 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60899 00:09:04.597 killing process with pid 60899 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60899' 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60899 00:09:04.597 [2024-09-27 22:26:00.397562] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.597 [2024-09-27 22:26:00.397653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.597 22:26:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60899 00:09:04.597 [2024-09-27 22:26:00.398168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.597 [2024-09-27 22:26:00.398321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:04.597 [2024-09-27 22:26:00.416578] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.504 ************************************ 00:09:06.504 END TEST raid1_resize_test 00:09:06.504 ************************************ 00:09:06.504 22:26:02 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:09:06.504 00:09:06.504 real 0m3.384s 00:09:06.504 user 0m3.470s 00:09:06.504 sys 0m0.444s 00:09:06.504 22:26:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.504 22:26:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 22:26:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:06.765 22:26:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:06.765 22:26:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:06.765 22:26:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:06.765 22:26:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.765 22:26:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 ************************************ 00:09:06.765 START TEST raid_state_function_test 00:09:06.765 ************************************ 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:06.765 Process raid pid: 60967 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60967 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60967' 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60967 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60967 ']' 00:09:06.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.765 22:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.765 [2024-09-27 22:26:02.530505] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:09:06.765 [2024-09-27 22:26:02.530636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.024 [2024-09-27 22:26:02.701753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.284 [2024-09-27 22:26:02.936690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.543 [2024-09-27 22:26:03.184037] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.543 [2024-09-27 22:26:03.184074] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.804 [2024-09-27 22:26:03.652941] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.804 [2024-09-27 22:26:03.653157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.804 [2024-09-27 22:26:03.653311] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.804 [2024-09-27 22:26:03.653362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.804 22:26:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.804 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.064 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.064 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.064 "name": "Existed_Raid", 00:09:08.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.064 "strip_size_kb": 64, 00:09:08.064 "state": "configuring", 00:09:08.064 
"raid_level": "raid0", 00:09:08.064 "superblock": false, 00:09:08.064 "num_base_bdevs": 2, 00:09:08.064 "num_base_bdevs_discovered": 0, 00:09:08.064 "num_base_bdevs_operational": 2, 00:09:08.064 "base_bdevs_list": [ 00:09:08.064 { 00:09:08.064 "name": "BaseBdev1", 00:09:08.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.064 "is_configured": false, 00:09:08.064 "data_offset": 0, 00:09:08.064 "data_size": 0 00:09:08.064 }, 00:09:08.064 { 00:09:08.064 "name": "BaseBdev2", 00:09:08.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.064 "is_configured": false, 00:09:08.064 "data_offset": 0, 00:09:08.064 "data_size": 0 00:09:08.064 } 00:09:08.064 ] 00:09:08.064 }' 00:09:08.064 22:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.064 22:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.323 [2024-09-27 22:26:04.060291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.323 [2024-09-27 22:26:04.060465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:08.323 [2024-09-27 22:26:04.072282] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.323 [2024-09-27 22:26:04.072441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.323 [2024-09-27 22:26:04.072552] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.323 [2024-09-27 22:26:04.072603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.323 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.323 [2024-09-27 22:26:04.127206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.323 BaseBdev1 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.324 [ 00:09:08.324 { 00:09:08.324 "name": "BaseBdev1", 00:09:08.324 "aliases": [ 00:09:08.324 "6d3f18c4-9997-4e14-928d-c9cab13ad053" 00:09:08.324 ], 00:09:08.324 "product_name": "Malloc disk", 00:09:08.324 "block_size": 512, 00:09:08.324 "num_blocks": 65536, 00:09:08.324 "uuid": "6d3f18c4-9997-4e14-928d-c9cab13ad053", 00:09:08.324 "assigned_rate_limits": { 00:09:08.324 "rw_ios_per_sec": 0, 00:09:08.324 "rw_mbytes_per_sec": 0, 00:09:08.324 "r_mbytes_per_sec": 0, 00:09:08.324 "w_mbytes_per_sec": 0 00:09:08.324 }, 00:09:08.324 "claimed": true, 00:09:08.324 "claim_type": "exclusive_write", 00:09:08.324 "zoned": false, 00:09:08.324 "supported_io_types": { 00:09:08.324 "read": true, 00:09:08.324 "write": true, 00:09:08.324 "unmap": true, 00:09:08.324 "flush": true, 00:09:08.324 "reset": true, 00:09:08.324 "nvme_admin": false, 00:09:08.324 "nvme_io": false, 00:09:08.324 "nvme_io_md": false, 00:09:08.324 "write_zeroes": true, 00:09:08.324 "zcopy": true, 00:09:08.324 "get_zone_info": false, 00:09:08.324 "zone_management": false, 00:09:08.324 "zone_append": false, 00:09:08.324 "compare": false, 00:09:08.324 "compare_and_write": false, 00:09:08.324 "abort": true, 00:09:08.324 "seek_hole": false, 00:09:08.324 "seek_data": false, 00:09:08.324 "copy": true, 00:09:08.324 "nvme_iov_md": 
false 00:09:08.324 }, 00:09:08.324 "memory_domains": [ 00:09:08.324 { 00:09:08.324 "dma_device_id": "system", 00:09:08.324 "dma_device_type": 1 00:09:08.324 }, 00:09:08.324 { 00:09:08.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.324 "dma_device_type": 2 00:09:08.324 } 00:09:08.324 ], 00:09:08.324 "driver_specific": {} 00:09:08.324 } 00:09:08.324 ] 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.324 
22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.324 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.583 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.583 "name": "Existed_Raid", 00:09:08.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.583 "strip_size_kb": 64, 00:09:08.583 "state": "configuring", 00:09:08.583 "raid_level": "raid0", 00:09:08.583 "superblock": false, 00:09:08.583 "num_base_bdevs": 2, 00:09:08.583 "num_base_bdevs_discovered": 1, 00:09:08.583 "num_base_bdevs_operational": 2, 00:09:08.583 "base_bdevs_list": [ 00:09:08.583 { 00:09:08.583 "name": "BaseBdev1", 00:09:08.583 "uuid": "6d3f18c4-9997-4e14-928d-c9cab13ad053", 00:09:08.583 "is_configured": true, 00:09:08.583 "data_offset": 0, 00:09:08.583 "data_size": 65536 00:09:08.583 }, 00:09:08.583 { 00:09:08.583 "name": "BaseBdev2", 00:09:08.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.583 "is_configured": false, 00:09:08.583 "data_offset": 0, 00:09:08.583 "data_size": 0 00:09:08.583 } 00:09:08.583 ] 00:09:08.583 }' 00:09:08.583 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.583 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.843 [2024-09-27 22:26:04.575018] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.843 [2024-09-27 22:26:04.575199] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.843 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.843 [2024-09-27 22:26:04.587066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.844 [2024-09-27 22:26:04.589313] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.844 [2024-09-27 22:26:04.589464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.844 "name": "Existed_Raid", 00:09:08.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.844 "strip_size_kb": 64, 00:09:08.844 "state": "configuring", 00:09:08.844 "raid_level": "raid0", 00:09:08.844 "superblock": false, 00:09:08.844 "num_base_bdevs": 2, 00:09:08.844 "num_base_bdevs_discovered": 1, 00:09:08.844 "num_base_bdevs_operational": 2, 00:09:08.844 "base_bdevs_list": [ 00:09:08.844 { 00:09:08.844 "name": "BaseBdev1", 00:09:08.844 "uuid": "6d3f18c4-9997-4e14-928d-c9cab13ad053", 00:09:08.844 "is_configured": true, 00:09:08.844 "data_offset": 0, 00:09:08.844 "data_size": 65536 00:09:08.844 }, 00:09:08.844 { 00:09:08.844 "name": "BaseBdev2", 00:09:08.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.844 "is_configured": false, 00:09:08.844 "data_offset": 0, 00:09:08.844 "data_size": 0 00:09:08.844 } 00:09:08.844 
] 00:09:08.844 }' 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.844 22:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.413 [2024-09-27 22:26:05.075233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.413 [2024-09-27 22:26:05.075282] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.413 [2024-09-27 22:26:05.075293] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:09.413 [2024-09-27 22:26:05.075572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:09.413 [2024-09-27 22:26:05.075719] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.413 [2024-09-27 22:26:05.075738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:09.413 [2024-09-27 22:26:05.076004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.413 BaseBdev2 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.413 22:26:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.413 [ 00:09:09.413 { 00:09:09.413 "name": "BaseBdev2", 00:09:09.413 "aliases": [ 00:09:09.413 "49f6ed56-1cc8-48e3-b58b-b95ee189cdcb" 00:09:09.413 ], 00:09:09.413 "product_name": "Malloc disk", 00:09:09.413 "block_size": 512, 00:09:09.413 "num_blocks": 65536, 00:09:09.413 "uuid": "49f6ed56-1cc8-48e3-b58b-b95ee189cdcb", 00:09:09.413 "assigned_rate_limits": { 00:09:09.413 "rw_ios_per_sec": 0, 00:09:09.413 "rw_mbytes_per_sec": 0, 00:09:09.413 "r_mbytes_per_sec": 0, 00:09:09.413 "w_mbytes_per_sec": 0 00:09:09.413 }, 00:09:09.413 "claimed": true, 00:09:09.413 "claim_type": "exclusive_write", 00:09:09.413 "zoned": false, 00:09:09.413 "supported_io_types": { 00:09:09.413 "read": true, 00:09:09.413 "write": true, 00:09:09.413 "unmap": true, 00:09:09.413 "flush": true, 00:09:09.413 "reset": true, 00:09:09.413 "nvme_admin": false, 00:09:09.413 "nvme_io": false, 00:09:09.413 "nvme_io_md": 
false, 00:09:09.413 "write_zeroes": true, 00:09:09.413 "zcopy": true, 00:09:09.413 "get_zone_info": false, 00:09:09.413 "zone_management": false, 00:09:09.413 "zone_append": false, 00:09:09.413 "compare": false, 00:09:09.413 "compare_and_write": false, 00:09:09.413 "abort": true, 00:09:09.413 "seek_hole": false, 00:09:09.413 "seek_data": false, 00:09:09.413 "copy": true, 00:09:09.413 "nvme_iov_md": false 00:09:09.413 }, 00:09:09.413 "memory_domains": [ 00:09:09.413 { 00:09:09.413 "dma_device_id": "system", 00:09:09.413 "dma_device_type": 1 00:09:09.413 }, 00:09:09.413 { 00:09:09.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.413 "dma_device_type": 2 00:09:09.413 } 00:09:09.413 ], 00:09:09.413 "driver_specific": {} 00:09:09.413 } 00:09:09.413 ] 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.413 "name": "Existed_Raid", 00:09:09.413 "uuid": "a601cdee-589e-4cd6-bf94-418edf38ac1a", 00:09:09.413 "strip_size_kb": 64, 00:09:09.413 "state": "online", 00:09:09.413 "raid_level": "raid0", 00:09:09.413 "superblock": false, 00:09:09.413 "num_base_bdevs": 2, 00:09:09.413 "num_base_bdevs_discovered": 2, 00:09:09.413 "num_base_bdevs_operational": 2, 00:09:09.413 "base_bdevs_list": [ 00:09:09.413 { 00:09:09.413 "name": "BaseBdev1", 00:09:09.413 "uuid": "6d3f18c4-9997-4e14-928d-c9cab13ad053", 00:09:09.413 "is_configured": true, 00:09:09.413 "data_offset": 0, 00:09:09.413 "data_size": 65536 00:09:09.413 }, 00:09:09.413 { 00:09:09.413 "name": "BaseBdev2", 00:09:09.413 "uuid": "49f6ed56-1cc8-48e3-b58b-b95ee189cdcb", 00:09:09.413 "is_configured": true, 00:09:09.413 "data_offset": 0, 00:09:09.413 "data_size": 65536 00:09:09.413 } 00:09:09.413 ] 00:09:09.413 }' 00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:09.413 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.673 [2024-09-27 22:26:05.511370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.673 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.673 "name": "Existed_Raid", 00:09:09.673 "aliases": [ 00:09:09.673 "a601cdee-589e-4cd6-bf94-418edf38ac1a" 00:09:09.673 ], 00:09:09.673 "product_name": "Raid Volume", 00:09:09.673 "block_size": 512, 00:09:09.673 "num_blocks": 131072, 00:09:09.673 "uuid": "a601cdee-589e-4cd6-bf94-418edf38ac1a", 00:09:09.673 "assigned_rate_limits": { 00:09:09.673 "rw_ios_per_sec": 0, 00:09:09.673 "rw_mbytes_per_sec": 0, 00:09:09.673 "r_mbytes_per_sec": 
0, 00:09:09.673 "w_mbytes_per_sec": 0 00:09:09.673 }, 00:09:09.673 "claimed": false, 00:09:09.673 "zoned": false, 00:09:09.673 "supported_io_types": { 00:09:09.673 "read": true, 00:09:09.673 "write": true, 00:09:09.673 "unmap": true, 00:09:09.673 "flush": true, 00:09:09.673 "reset": true, 00:09:09.673 "nvme_admin": false, 00:09:09.673 "nvme_io": false, 00:09:09.673 "nvme_io_md": false, 00:09:09.673 "write_zeroes": true, 00:09:09.673 "zcopy": false, 00:09:09.673 "get_zone_info": false, 00:09:09.673 "zone_management": false, 00:09:09.673 "zone_append": false, 00:09:09.673 "compare": false, 00:09:09.673 "compare_and_write": false, 00:09:09.673 "abort": false, 00:09:09.673 "seek_hole": false, 00:09:09.673 "seek_data": false, 00:09:09.673 "copy": false, 00:09:09.673 "nvme_iov_md": false 00:09:09.673 }, 00:09:09.673 "memory_domains": [ 00:09:09.674 { 00:09:09.674 "dma_device_id": "system", 00:09:09.674 "dma_device_type": 1 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.674 "dma_device_type": 2 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "dma_device_id": "system", 00:09:09.674 "dma_device_type": 1 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.674 "dma_device_type": 2 00:09:09.674 } 00:09:09.674 ], 00:09:09.674 "driver_specific": { 00:09:09.674 "raid": { 00:09:09.674 "uuid": "a601cdee-589e-4cd6-bf94-418edf38ac1a", 00:09:09.674 "strip_size_kb": 64, 00:09:09.674 "state": "online", 00:09:09.674 "raid_level": "raid0", 00:09:09.674 "superblock": false, 00:09:09.674 "num_base_bdevs": 2, 00:09:09.674 "num_base_bdevs_discovered": 2, 00:09:09.674 "num_base_bdevs_operational": 2, 00:09:09.674 "base_bdevs_list": [ 00:09:09.674 { 00:09:09.674 "name": "BaseBdev1", 00:09:09.674 "uuid": "6d3f18c4-9997-4e14-928d-c9cab13ad053", 00:09:09.674 "is_configured": true, 00:09:09.674 "data_offset": 0, 00:09:09.674 "data_size": 65536 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "name": "BaseBdev2", 
00:09:09.674 "uuid": "49f6ed56-1cc8-48e3-b58b-b95ee189cdcb", 00:09:09.674 "is_configured": true, 00:09:09.674 "data_offset": 0, 00:09:09.674 "data_size": 65536 00:09:09.674 } 00:09:09.674 ] 00:09:09.674 } 00:09:09.674 } 00:09:09.674 }' 00:09:09.674 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.933 BaseBdev2' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.933 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.933 [2024-09-27 22:26:05.739074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.933 [2024-09-27 22:26:05.739229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.933 [2024-09-27 22:26:05.739361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.193 "name": "Existed_Raid", 00:09:10.193 "uuid": "a601cdee-589e-4cd6-bf94-418edf38ac1a", 00:09:10.193 "strip_size_kb": 64, 00:09:10.193 
"state": "offline", 00:09:10.193 "raid_level": "raid0", 00:09:10.193 "superblock": false, 00:09:10.193 "num_base_bdevs": 2, 00:09:10.193 "num_base_bdevs_discovered": 1, 00:09:10.193 "num_base_bdevs_operational": 1, 00:09:10.193 "base_bdevs_list": [ 00:09:10.193 { 00:09:10.193 "name": null, 00:09:10.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.193 "is_configured": false, 00:09:10.193 "data_offset": 0, 00:09:10.193 "data_size": 65536 00:09:10.193 }, 00:09:10.193 { 00:09:10.193 "name": "BaseBdev2", 00:09:10.193 "uuid": "49f6ed56-1cc8-48e3-b58b-b95ee189cdcb", 00:09:10.193 "is_configured": true, 00:09:10.193 "data_offset": 0, 00:09:10.193 "data_size": 65536 00:09:10.193 } 00:09:10.193 ] 00:09:10.193 }' 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.193 22:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.452 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.452 [2024-09-27 22:26:06.297929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.452 [2024-09-27 22:26:06.298133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60967 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60967 ']' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 60967 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60967 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.711 killing process with pid 60967 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60967' 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60967 00:09:10.711 [2024-09-27 22:26:06.487856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.711 22:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60967 00:09:10.711 [2024-09-27 22:26:06.505396] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.342 00:09:13.342 real 0m6.180s 00:09:13.342 user 0m8.158s 00:09:13.342 sys 0m0.989s 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.342 ************************************ 00:09:13.342 END TEST raid_state_function_test 00:09:13.342 ************************************ 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.342 22:26:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:13.342 22:26:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:09:13.342 22:26:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.342 22:26:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.342 ************************************ 00:09:13.342 START TEST raid_state_function_test_sb 00:09:13.342 ************************************ 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:13.342 Process raid pid: 61231 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61231 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61231' 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61231 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61231 ']' 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.342 22:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.342 [2024-09-27 22:26:08.778924] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:09:13.342 [2024-09-27 22:26:08.779329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.342 [2024-09-27 22:26:08.953810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.601 [2024-09-27 22:26:09.245693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.859 [2024-09-27 22:26:09.508720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.859 [2024-09-27 22:26:09.508780] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.425 22:26:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 [2024-09-27 22:26:10.064607] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.425 [2024-09-27 22:26:10.064894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.425 [2024-09-27 22:26:10.065011] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.425 [2024-09-27 22:26:10.065061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.425 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.426 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.426 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.426 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.426 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.426 "name": "Existed_Raid", 00:09:14.426 "uuid": "a3f0216e-1cfb-41b0-8d17-41771a722b91", 00:09:14.426 "strip_size_kb": 64, 00:09:14.426 "state": "configuring", 00:09:14.426 "raid_level": "raid0", 00:09:14.426 "superblock": true, 00:09:14.426 "num_base_bdevs": 2, 00:09:14.426 "num_base_bdevs_discovered": 0, 00:09:14.426 "num_base_bdevs_operational": 2, 00:09:14.426 "base_bdevs_list": [ 00:09:14.426 { 00:09:14.426 "name": "BaseBdev1", 00:09:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.426 "is_configured": false, 00:09:14.426 "data_offset": 0, 00:09:14.426 "data_size": 0 00:09:14.426 }, 00:09:14.426 { 00:09:14.426 "name": "BaseBdev2", 00:09:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.426 "is_configured": false, 00:09:14.426 "data_offset": 0, 00:09:14.426 "data_size": 0 00:09:14.426 } 00:09:14.426 ] 00:09:14.426 }' 00:09:14.426 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.426 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.683 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.683 22:26:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.683 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.683 [2024-09-27 22:26:10.523853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.683 [2024-09-27 22:26:10.523903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.683 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.684 [2024-09-27 22:26:10.535844] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.684 [2024-09-27 22:26:10.535899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.684 [2024-09-27 22:26:10.535911] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.684 [2024-09-27 22:26:10.535928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.684 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.943 [2024-09-27 22:26:10.589079] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.943 BaseBdev1 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.943 [ 00:09:14.943 { 00:09:14.943 "name": "BaseBdev1", 00:09:14.943 "aliases": [ 00:09:14.943 "08006883-23fc-4374-b48f-5f2ffd7d60c7" 00:09:14.943 ], 00:09:14.943 "product_name": "Malloc disk", 00:09:14.943 "block_size": 512, 00:09:14.943 "num_blocks": 65536, 00:09:14.943 
"uuid": "08006883-23fc-4374-b48f-5f2ffd7d60c7", 00:09:14.943 "assigned_rate_limits": { 00:09:14.943 "rw_ios_per_sec": 0, 00:09:14.943 "rw_mbytes_per_sec": 0, 00:09:14.943 "r_mbytes_per_sec": 0, 00:09:14.943 "w_mbytes_per_sec": 0 00:09:14.943 }, 00:09:14.943 "claimed": true, 00:09:14.943 "claim_type": "exclusive_write", 00:09:14.943 "zoned": false, 00:09:14.943 "supported_io_types": { 00:09:14.943 "read": true, 00:09:14.943 "write": true, 00:09:14.943 "unmap": true, 00:09:14.943 "flush": true, 00:09:14.943 "reset": true, 00:09:14.943 "nvme_admin": false, 00:09:14.943 "nvme_io": false, 00:09:14.943 "nvme_io_md": false, 00:09:14.943 "write_zeroes": true, 00:09:14.943 "zcopy": true, 00:09:14.943 "get_zone_info": false, 00:09:14.943 "zone_management": false, 00:09:14.943 "zone_append": false, 00:09:14.943 "compare": false, 00:09:14.943 "compare_and_write": false, 00:09:14.943 "abort": true, 00:09:14.943 "seek_hole": false, 00:09:14.943 "seek_data": false, 00:09:14.943 "copy": true, 00:09:14.943 "nvme_iov_md": false 00:09:14.943 }, 00:09:14.943 "memory_domains": [ 00:09:14.943 { 00:09:14.943 "dma_device_id": "system", 00:09:14.943 "dma_device_type": 1 00:09:14.943 }, 00:09:14.943 { 00:09:14.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.943 "dma_device_type": 2 00:09:14.943 } 00:09:14.943 ], 00:09:14.943 "driver_specific": {} 00:09:14.943 } 00:09:14.943 ] 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.943 "name": "Existed_Raid", 00:09:14.943 "uuid": "03092caa-a957-4dcb-b425-f0ae007e7a75", 00:09:14.943 "strip_size_kb": 64, 00:09:14.943 "state": "configuring", 00:09:14.943 "raid_level": "raid0", 00:09:14.943 "superblock": true, 00:09:14.943 "num_base_bdevs": 2, 00:09:14.943 "num_base_bdevs_discovered": 1, 00:09:14.943 "num_base_bdevs_operational": 2, 00:09:14.943 "base_bdevs_list": [ 00:09:14.943 { 00:09:14.943 "name": "BaseBdev1", 00:09:14.943 "uuid": "08006883-23fc-4374-b48f-5f2ffd7d60c7", 
00:09:14.943 "is_configured": true, 00:09:14.943 "data_offset": 2048, 00:09:14.943 "data_size": 63488 00:09:14.943 }, 00:09:14.943 { 00:09:14.943 "name": "BaseBdev2", 00:09:14.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.943 "is_configured": false, 00:09:14.943 "data_offset": 0, 00:09:14.943 "data_size": 0 00:09:14.943 } 00:09:14.943 ] 00:09:14.943 }' 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.943 22:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.203 [2024-09-27 22:26:11.008553] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.203 [2024-09-27 22:26:11.008741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.203 [2024-09-27 22:26:11.020588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.203 [2024-09-27 22:26:11.022968] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.203 [2024-09-27 
22:26:11.023029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.203 22:26:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.203 "name": "Existed_Raid", 00:09:15.203 "uuid": "904ef6da-01ea-4440-b9b0-361bc983aef0", 00:09:15.203 "strip_size_kb": 64, 00:09:15.203 "state": "configuring", 00:09:15.203 "raid_level": "raid0", 00:09:15.203 "superblock": true, 00:09:15.203 "num_base_bdevs": 2, 00:09:15.203 "num_base_bdevs_discovered": 1, 00:09:15.203 "num_base_bdevs_operational": 2, 00:09:15.203 "base_bdevs_list": [ 00:09:15.203 { 00:09:15.203 "name": "BaseBdev1", 00:09:15.203 "uuid": "08006883-23fc-4374-b48f-5f2ffd7d60c7", 00:09:15.203 "is_configured": true, 00:09:15.203 "data_offset": 2048, 00:09:15.203 "data_size": 63488 00:09:15.203 }, 00:09:15.203 { 00:09:15.203 "name": "BaseBdev2", 00:09:15.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.203 "is_configured": false, 00:09:15.203 "data_offset": 0, 00:09:15.203 "data_size": 0 00:09:15.203 } 00:09:15.203 ] 00:09:15.203 }' 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.203 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.771 [2024-09-27 22:26:11.478953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.771 [2024-09-27 22:26:11.479239] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x617000007e80 00:09:15.771 [2024-09-27 22:26:11.479256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:15.771 [2024-09-27 22:26:11.479595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:15.771 [2024-09-27 22:26:11.479765] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.771 [2024-09-27 22:26:11.479780] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.771 BaseBdev2 00:09:15.771 [2024-09-27 22:26:11.479931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.771 22:26:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.771 [ 00:09:15.771 { 00:09:15.771 "name": "BaseBdev2", 00:09:15.771 "aliases": [ 00:09:15.771 "0fd61b7d-9370-42b4-8efa-ce70ed3a47b9" 00:09:15.771 ], 00:09:15.771 "product_name": "Malloc disk", 00:09:15.771 "block_size": 512, 00:09:15.771 "num_blocks": 65536, 00:09:15.771 "uuid": "0fd61b7d-9370-42b4-8efa-ce70ed3a47b9", 00:09:15.771 "assigned_rate_limits": { 00:09:15.771 "rw_ios_per_sec": 0, 00:09:15.771 "rw_mbytes_per_sec": 0, 00:09:15.771 "r_mbytes_per_sec": 0, 00:09:15.771 "w_mbytes_per_sec": 0 00:09:15.771 }, 00:09:15.771 "claimed": true, 00:09:15.771 "claim_type": "exclusive_write", 00:09:15.771 "zoned": false, 00:09:15.771 "supported_io_types": { 00:09:15.771 "read": true, 00:09:15.771 "write": true, 00:09:15.771 "unmap": true, 00:09:15.771 "flush": true, 00:09:15.771 "reset": true, 00:09:15.771 "nvme_admin": false, 00:09:15.771 "nvme_io": false, 00:09:15.771 "nvme_io_md": false, 00:09:15.771 "write_zeroes": true, 00:09:15.771 "zcopy": true, 00:09:15.771 "get_zone_info": false, 00:09:15.771 "zone_management": false, 00:09:15.771 "zone_append": false, 00:09:15.771 "compare": false, 00:09:15.771 "compare_and_write": false, 00:09:15.771 "abort": true, 00:09:15.771 "seek_hole": false, 00:09:15.771 "seek_data": false, 00:09:15.771 "copy": true, 00:09:15.771 "nvme_iov_md": false 00:09:15.771 }, 00:09:15.771 "memory_domains": [ 00:09:15.771 { 00:09:15.771 "dma_device_id": "system", 00:09:15.771 "dma_device_type": 1 00:09:15.771 }, 00:09:15.771 { 00:09:15.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.771 "dma_device_type": 2 00:09:15.771 } 00:09:15.771 ], 00:09:15.771 "driver_specific": {} 00:09:15.771 } 00:09:15.771 ] 
00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.771 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.771 
22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.772 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.772 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.772 "name": "Existed_Raid", 00:09:15.772 "uuid": "904ef6da-01ea-4440-b9b0-361bc983aef0", 00:09:15.772 "strip_size_kb": 64, 00:09:15.772 "state": "online", 00:09:15.772 "raid_level": "raid0", 00:09:15.772 "superblock": true, 00:09:15.772 "num_base_bdevs": 2, 00:09:15.772 "num_base_bdevs_discovered": 2, 00:09:15.772 "num_base_bdevs_operational": 2, 00:09:15.772 "base_bdevs_list": [ 00:09:15.772 { 00:09:15.772 "name": "BaseBdev1", 00:09:15.772 "uuid": "08006883-23fc-4374-b48f-5f2ffd7d60c7", 00:09:15.772 "is_configured": true, 00:09:15.772 "data_offset": 2048, 00:09:15.772 "data_size": 63488 00:09:15.772 }, 00:09:15.772 { 00:09:15.772 "name": "BaseBdev2", 00:09:15.772 "uuid": "0fd61b7d-9370-42b4-8efa-ce70ed3a47b9", 00:09:15.772 "is_configured": true, 00:09:15.772 "data_offset": 2048, 00:09:15.772 "data_size": 63488 00:09:15.772 } 00:09:15.772 ] 00:09:15.772 }' 00:09:15.772 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.772 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.340 22:26:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 [2024-09-27 22:26:11.954599] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.340 "name": "Existed_Raid", 00:09:16.340 "aliases": [ 00:09:16.340 "904ef6da-01ea-4440-b9b0-361bc983aef0" 00:09:16.340 ], 00:09:16.340 "product_name": "Raid Volume", 00:09:16.340 "block_size": 512, 00:09:16.340 "num_blocks": 126976, 00:09:16.340 "uuid": "904ef6da-01ea-4440-b9b0-361bc983aef0", 00:09:16.340 "assigned_rate_limits": { 00:09:16.340 "rw_ios_per_sec": 0, 00:09:16.340 "rw_mbytes_per_sec": 0, 00:09:16.340 "r_mbytes_per_sec": 0, 00:09:16.340 "w_mbytes_per_sec": 0 00:09:16.340 }, 00:09:16.340 "claimed": false, 00:09:16.340 "zoned": false, 00:09:16.340 "supported_io_types": { 00:09:16.340 "read": true, 00:09:16.341 "write": true, 00:09:16.341 "unmap": true, 00:09:16.341 "flush": true, 00:09:16.341 "reset": true, 00:09:16.341 "nvme_admin": false, 00:09:16.341 "nvme_io": false, 00:09:16.341 "nvme_io_md": false, 00:09:16.341 "write_zeroes": true, 00:09:16.341 "zcopy": false, 00:09:16.341 "get_zone_info": false, 00:09:16.341 "zone_management": false, 00:09:16.341 "zone_append": false, 00:09:16.341 "compare": false, 00:09:16.341 "compare_and_write": false, 00:09:16.341 "abort": false, 
00:09:16.341 "seek_hole": false, 00:09:16.341 "seek_data": false, 00:09:16.341 "copy": false, 00:09:16.341 "nvme_iov_md": false 00:09:16.341 }, 00:09:16.341 "memory_domains": [ 00:09:16.341 { 00:09:16.341 "dma_device_id": "system", 00:09:16.341 "dma_device_type": 1 00:09:16.341 }, 00:09:16.341 { 00:09:16.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.341 "dma_device_type": 2 00:09:16.341 }, 00:09:16.341 { 00:09:16.341 "dma_device_id": "system", 00:09:16.341 "dma_device_type": 1 00:09:16.341 }, 00:09:16.341 { 00:09:16.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.341 "dma_device_type": 2 00:09:16.341 } 00:09:16.341 ], 00:09:16.341 "driver_specific": { 00:09:16.341 "raid": { 00:09:16.341 "uuid": "904ef6da-01ea-4440-b9b0-361bc983aef0", 00:09:16.341 "strip_size_kb": 64, 00:09:16.341 "state": "online", 00:09:16.341 "raid_level": "raid0", 00:09:16.341 "superblock": true, 00:09:16.341 "num_base_bdevs": 2, 00:09:16.341 "num_base_bdevs_discovered": 2, 00:09:16.341 "num_base_bdevs_operational": 2, 00:09:16.341 "base_bdevs_list": [ 00:09:16.341 { 00:09:16.341 "name": "BaseBdev1", 00:09:16.341 "uuid": "08006883-23fc-4374-b48f-5f2ffd7d60c7", 00:09:16.341 "is_configured": true, 00:09:16.341 "data_offset": 2048, 00:09:16.341 "data_size": 63488 00:09:16.341 }, 00:09:16.341 { 00:09:16.341 "name": "BaseBdev2", 00:09:16.341 "uuid": "0fd61b7d-9370-42b4-8efa-ce70ed3a47b9", 00:09:16.341 "is_configured": true, 00:09:16.341 "data_offset": 2048, 00:09:16.341 "data_size": 63488 00:09:16.341 } 00:09:16.341 ] 00:09:16.341 } 00:09:16.341 } 00:09:16.341 }' 00:09:16.341 22:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.341 BaseBdev2' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.341 22:26:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.341 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.341 [2024-09-27 22:26:12.198062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.341 [2024-09-27 22:26:12.198097] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.341 [2024-09-27 22:26:12.198150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.601 "name": "Existed_Raid", 00:09:16.601 "uuid": "904ef6da-01ea-4440-b9b0-361bc983aef0", 00:09:16.601 "strip_size_kb": 64, 00:09:16.601 "state": "offline", 00:09:16.601 "raid_level": "raid0", 00:09:16.601 "superblock": true, 00:09:16.601 "num_base_bdevs": 2, 00:09:16.601 "num_base_bdevs_discovered": 1, 00:09:16.601 "num_base_bdevs_operational": 1, 00:09:16.601 "base_bdevs_list": [ 00:09:16.601 { 00:09:16.601 "name": null, 00:09:16.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.601 "is_configured": false, 00:09:16.601 "data_offset": 0, 00:09:16.601 "data_size": 63488 00:09:16.601 }, 00:09:16.601 { 00:09:16.601 "name": "BaseBdev2", 00:09:16.601 "uuid": 
"0fd61b7d-9370-42b4-8efa-ce70ed3a47b9", 00:09:16.601 "is_configured": true, 00:09:16.601 "data_offset": 2048, 00:09:16.601 "data_size": 63488 00:09:16.601 } 00:09:16.601 ] 00:09:16.601 }' 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.601 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.861 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.861 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.861 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.861 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.861 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.861 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.120 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.120 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.120 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.120 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.121 [2024-09-27 22:26:12.779183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.121 [2024-09-27 22:26:12.779384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
Existed_Raid, state offline 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61231 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61231 ']' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61231 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61231 00:09:17.121 killing process with pid 61231 00:09:17.121 22:26:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61231' 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61231 00:09:17.121 [2024-09-27 22:26:12.963058] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.121 22:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61231 00:09:17.121 [2024-09-27 22:26:12.981407] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.655 22:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:19.655 00:09:19.655 real 0m6.332s 00:09:19.655 user 0m8.253s 00:09:19.655 sys 0m1.133s 00:09:19.655 22:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.655 22:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 ************************************ 00:09:19.655 END TEST raid_state_function_test_sb 00:09:19.655 ************************************ 00:09:19.655 22:26:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:19.655 22:26:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:19.655 22:26:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.655 22:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 ************************************ 00:09:19.655 START TEST raid_superblock_test 00:09:19.655 ************************************ 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:09:19.655 
22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61500 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 61500 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61500 ']' 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.655 22:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.655 [2024-09-27 22:26:15.174814] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:09:19.655 [2024-09-27 22:26:15.174955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61500 ] 00:09:19.655 [2024-09-27 22:26:15.340636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.915 [2024-09-27 22:26:15.572995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.173 [2024-09-27 22:26:15.798152] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.173 [2024-09-27 22:26:15.798373] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:20.432 22:26:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.432 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.692 malloc1 00:09:20.692 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.693 [2024-09-27 22:26:16.322566] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.693 [2024-09-27 22:26:16.322636] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.693 [2024-09-27 22:26:16.322663] vbdev_passthru.c: 
762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:20.693 [2024-09-27 22:26:16.322678] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:20.693 [2024-09-27 22:26:16.325085] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:20.693 [2024-09-27 22:26:16.325124] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:20.693 pt1
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.693 malloc2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.693 [2024-09-27 22:26:16.385178] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:20.693 [2024-09-27 22:26:16.385357] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:20.693 [2024-09-27 22:26:16.385421] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:20.693 [2024-09-27 22:26:16.385533] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:20.693 [2024-09-27 22:26:16.387941] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:20.693 [2024-09-27 22:26:16.388094] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:20.693 pt2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.693 [2024-09-27 22:26:16.397235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:20.693 [2024-09-27 22:26:16.399397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:20.693 [2024-09-27 22:26:16.399690] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:20.693 [2024-09-27 22:26:16.399783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:20.693 [2024-09-27 22:26:16.400083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:20.693 [2024-09-27 22:26:16.400271] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:20.693 [2024-09-27 22:26:16.400360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:20.693 [2024-09-27 22:26:16.400559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.693 "name": "raid_bdev1",
00:09:20.693 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:20.693 "strip_size_kb": 64,
00:09:20.693 "state": "online",
00:09:20.693 "raid_level": "raid0",
00:09:20.693 "superblock": true,
00:09:20.693 "num_base_bdevs": 2,
00:09:20.693 "num_base_bdevs_discovered": 2,
00:09:20.693 "num_base_bdevs_operational": 2,
00:09:20.693 "base_bdevs_list": [
00:09:20.693 {
00:09:20.693 "name": "pt1",
00:09:20.693 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:20.693 "is_configured": true,
00:09:20.693 "data_offset": 2048,
00:09:20.693 "data_size": 63488
00:09:20.693 },
00:09:20.693 {
00:09:20.693 "name": "pt2",
00:09:20.693 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:20.693 "is_configured": true,
00:09:20.693 "data_offset": 2048,
00:09:20.693 "data_size": 63488
00:09:20.693 }
00:09:20.693 ]
00:09:20.693 }'
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.693 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 [2024-09-27 22:26:16.853092] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:21.261 "name": "raid_bdev1",
00:09:21.261 "aliases": [
00:09:21.261 "3632f8d4-9c06-4527-a004-a6237da2994e"
00:09:21.261 ],
00:09:21.261 "product_name": "Raid Volume",
00:09:21.261 "block_size": 512,
00:09:21.261 "num_blocks": 126976,
00:09:21.261 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:21.261 "assigned_rate_limits": {
00:09:21.261 "rw_ios_per_sec": 0,
00:09:21.261 "rw_mbytes_per_sec": 0,
00:09:21.261 "r_mbytes_per_sec": 0,
00:09:21.261 "w_mbytes_per_sec": 0
00:09:21.261 },
00:09:21.261 "claimed": false,
00:09:21.261 "zoned": false,
00:09:21.261 "supported_io_types": {
00:09:21.261 "read": true,
00:09:21.261 "write": true,
00:09:21.261 "unmap": true,
00:09:21.261 "flush": true,
00:09:21.261 "reset": true,
00:09:21.261 "nvme_admin": false,
00:09:21.261 "nvme_io": false,
00:09:21.261 "nvme_io_md": false,
00:09:21.261 "write_zeroes": true,
00:09:21.261 "zcopy": false,
00:09:21.261 "get_zone_info": false,
00:09:21.261 "zone_management": false,
00:09:21.261 "zone_append": false,
00:09:21.261 "compare": false,
00:09:21.261 "compare_and_write": false,
00:09:21.261 "abort": false,
00:09:21.261 "seek_hole": false,
00:09:21.261 "seek_data": false,
00:09:21.261 "copy": false,
00:09:21.261 "nvme_iov_md": false
00:09:21.261 },
00:09:21.261 "memory_domains": [
00:09:21.261 {
00:09:21.261 "dma_device_id": "system",
00:09:21.261 "dma_device_type": 1
00:09:21.261 },
00:09:21.261 {
00:09:21.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.261 "dma_device_type": 2
00:09:21.261 },
00:09:21.261 {
00:09:21.261 "dma_device_id": "system",
00:09:21.261 "dma_device_type": 1
00:09:21.261 },
00:09:21.261 {
00:09:21.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:21.261 "dma_device_type": 2
00:09:21.261 }
00:09:21.261 ],
00:09:21.261 "driver_specific": {
00:09:21.261 "raid": {
00:09:21.261 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:21.261 "strip_size_kb": 64,
00:09:21.261 "state": "online",
00:09:21.261 "raid_level": "raid0",
00:09:21.261 "superblock": true,
00:09:21.261 "num_base_bdevs": 2,
00:09:21.261 "num_base_bdevs_discovered": 2,
00:09:21.261 "num_base_bdevs_operational": 2,
00:09:21.261 "base_bdevs_list": [
00:09:21.261 {
00:09:21.261 "name": "pt1",
00:09:21.261 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:21.261 "is_configured": true,
00:09:21.261 "data_offset": 2048,
00:09:21.261 "data_size": 63488
00:09:21.261 },
00:09:21.261 {
00:09:21.261 "name": "pt2",
00:09:21.261 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:21.261 "is_configured": true,
00:09:21.261 "data_offset": 2048,
00:09:21.261 "data_size": 63488
00:09:21.261 }
00:09:21.261 ]
00:09:21.261 }
00:09:21.261 }
00:09:21.261 }'
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:21.261 pt2'
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 22:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 [2024-09-27 22:26:17.064727] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3632f8d4-9c06-4527-a004-a6237da2994e
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3632f8d4-9c06-4527-a004-a6237da2994e ']'
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 [2024-09-27 22:26:17.108436] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:21.261 [2024-09-27 22:26:17.108569] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:21.261 [2024-09-27 22:26:17.108713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:21.261 [2024-09-27 22:26:17.108789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:21.261 [2024-09-27 22:26:17.108954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.261 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 [2024-09-27 22:26:17.240258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:21.522 [2024-09-27 22:26:17.242525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:21.522 [2024-09-27 22:26:17.242700] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:21.522 [2024-09-27 22:26:17.242765] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:21.522 [2024-09-27 22:26:17.242782] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:21.522 [2024-09-27 22:26:17.242795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:21.522 request:
00:09:21.522 {
00:09:21.522 "name": "raid_bdev1",
00:09:21.522 "raid_level": "raid0",
00:09:21.522 "base_bdevs": [
00:09:21.522 "malloc1",
00:09:21.522 "malloc2"
00:09:21.522 ],
00:09:21.522 "strip_size_kb": 64,
00:09:21.522 "superblock": false,
00:09:21.522 "method": "bdev_raid_create",
00:09:21.522 "req_id": 1
00:09:21.522 }
00:09:21.522 Got JSON-RPC error response
00:09:21.522 response:
00:09:21.522 {
00:09:21.522 "code": -17,
00:09:21.522 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:21.522 }
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 [2024-09-27 22:26:17.308153] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:21.522 [2024-09-27 22:26:17.308320] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:21.522 [2024-09-27 22:26:17.308347] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:21.522 [2024-09-27 22:26:17.308361] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:21.522 [2024-09-27 22:26:17.310821] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:21.522 [2024-09-27 22:26:17.310995] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:21.522 [2024-09-27 22:26:17.311099] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:21.522 [2024-09-27 22:26:17.311170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:21.522 pt1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:21.522 "name": "raid_bdev1",
00:09:21.522 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:21.522 "strip_size_kb": 64,
00:09:21.522 "state": "configuring",
00:09:21.522 "raid_level": "raid0",
00:09:21.522 "superblock": true,
00:09:21.522 "num_base_bdevs": 2,
00:09:21.522 "num_base_bdevs_discovered": 1,
00:09:21.522 "num_base_bdevs_operational": 2,
00:09:21.522 "base_bdevs_list": [
00:09:21.522 {
00:09:21.522 "name": "pt1",
00:09:21.522 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:21.522 "is_configured": true,
00:09:21.522 "data_offset": 2048,
00:09:21.522 "data_size": 63488
00:09:21.522 },
00:09:21.522 {
00:09:21.522 "name": null,
00:09:21.522 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:21.522 "is_configured": false,
00:09:21.522 "data_offset": 2048,
00:09:21.522 "data_size": 63488
00:09:21.522 }
00:09:21.522 ]
00:09:21.522 }'
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:21.522 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.091 [2024-09-27 22:26:17.703684] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:22.091 [2024-09-27 22:26:17.703882] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:22.091 [2024-09-27 22:26:17.703912] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:09:22.091 [2024-09-27 22:26:17.703927] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:22.091 [2024-09-27 22:26:17.704424] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:22.091 [2024-09-27 22:26:17.704447] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:22.091 [2024-09-27 22:26:17.704527] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:22.091 [2024-09-27 22:26:17.704553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:22.091 [2024-09-27 22:26:17.704654] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:22.091 [2024-09-27 22:26:17.704667] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:22.091 [2024-09-27 22:26:17.704900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:22.091 [2024-09-27 22:26:17.705050] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:22.091 [2024-09-27 22:26:17.705061] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:22.091 [2024-09-27 22:26:17.705188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:22.091 pt2
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:22.091 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:22.092 "name": "raid_bdev1",
00:09:22.092 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:22.092 "strip_size_kb": 64,
00:09:22.092 "state": "online",
00:09:22.092 "raid_level": "raid0",
00:09:22.092 "superblock": true,
00:09:22.092 "num_base_bdevs": 2,
00:09:22.092 "num_base_bdevs_discovered": 2,
00:09:22.092 "num_base_bdevs_operational": 2,
00:09:22.092 "base_bdevs_list": [
00:09:22.092 {
00:09:22.092 "name": "pt1",
00:09:22.092 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:22.092 "is_configured": true,
00:09:22.092 "data_offset": 2048,
00:09:22.092 "data_size": 63488
00:09:22.092 },
00:09:22.092 {
00:09:22.092 "name": "pt2",
00:09:22.092 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:22.092 "is_configured": true,
00:09:22.092 "data_offset": 2048,
00:09:22.092 "data_size": 63488
00:09:22.092 }
00:09:22.092 ]
00:09:22.092 }'
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:22.092 22:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.351 [2024-09-27 22:26:18.115382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.351 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:22.351 "name": "raid_bdev1",
00:09:22.351 "aliases": [
00:09:22.351 "3632f8d4-9c06-4527-a004-a6237da2994e"
00:09:22.351 ],
00:09:22.351 "product_name": "Raid Volume",
00:09:22.351 "block_size": 512,
00:09:22.351 "num_blocks": 126976,
00:09:22.351 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:22.351 "assigned_rate_limits": {
00:09:22.351 "rw_ios_per_sec": 0,
00:09:22.351 "rw_mbytes_per_sec": 0,
00:09:22.351 "r_mbytes_per_sec": 0,
00:09:22.351 "w_mbytes_per_sec": 0
00:09:22.351 },
00:09:22.351 "claimed": false,
00:09:22.351 "zoned": false,
00:09:22.351 "supported_io_types": {
00:09:22.351 "read": true,
00:09:22.351 "write": true,
00:09:22.351 "unmap": true,
00:09:22.351 "flush": true,
00:09:22.351 "reset": true,
00:09:22.351 "nvme_admin": false,
00:09:22.351 "nvme_io": false,
00:09:22.351 "nvme_io_md": false,
00:09:22.351 "write_zeroes": true,
00:09:22.351 "zcopy": false,
00:09:22.351 "get_zone_info": false,
00:09:22.351 "zone_management": false,
00:09:22.351 "zone_append": false,
00:09:22.351 "compare": false,
00:09:22.351 "compare_and_write": false,
00:09:22.351 "abort": false,
00:09:22.351 "seek_hole": false,
00:09:22.351 "seek_data": false,
00:09:22.351 "copy": false,
00:09:22.351 "nvme_iov_md": false
00:09:22.352 },
00:09:22.352 "memory_domains": [
00:09:22.352 {
00:09:22.352 "dma_device_id": "system",
00:09:22.352 "dma_device_type": 1
00:09:22.352 },
00:09:22.352 {
00:09:22.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:22.352 "dma_device_type": 2
00:09:22.352 },
00:09:22.352 {
00:09:22.352 "dma_device_id": "system",
00:09:22.352 "dma_device_type": 1
00:09:22.352 },
00:09:22.352 {
00:09:22.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:22.352 "dma_device_type": 2
00:09:22.352 }
00:09:22.352 ],
00:09:22.352 "driver_specific": {
00:09:22.352 "raid": {
00:09:22.352 "uuid": "3632f8d4-9c06-4527-a004-a6237da2994e",
00:09:22.352 "strip_size_kb": 64,
00:09:22.352 "state": "online",
00:09:22.352 "raid_level": "raid0",
00:09:22.352 "superblock": true,
00:09:22.352 "num_base_bdevs": 2,
00:09:22.352 "num_base_bdevs_discovered": 2,
00:09:22.352 "num_base_bdevs_operational": 2,
00:09:22.352 "base_bdevs_list": [
00:09:22.352 {
00:09:22.352 "name": "pt1",
00:09:22.352 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:22.352 "is_configured": true,
00:09:22.352 "data_offset": 2048,
00:09:22.352 "data_size": 63488
00:09:22.352 },
00:09:22.352 {
00:09:22.352 "name": "pt2",
00:09:22.352 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:22.352 "is_configured": true,
00:09:22.352 "data_offset": 2048,
00:09:22.352 "data_size": 63488
00:09:22.352 }
00:09:22.352 ]
00:09:22.352 }
00:09:22.352 }
00:09:22.352 }'
00:09:22.352 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:22.352 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:22.352 pt2'
00:09:22.352 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:22.611 [2024-09-27 22:26:18.335330] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3632f8d4-9c06-4527-a004-a6237da2994e '!=' 3632f8d4-9c06-4527-a004-a6237da2994e ']'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61500
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61500 ']'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61500
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61500
killing process with pid 61500
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61500'
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61500
00:09:22.611 [2024-09-27 22:26:18.411390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-09-27 22:26:18.411497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:22.611 22:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61500
00:09:22.611 [2024-09-27 22:26:18.411553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:22.611 [2024-09-27 22:26:18.411568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:22.611 [2024-09-27 22:26:18.615359] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:24.774 22:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:24.774
00:09:24.774 real 0m5.519s
user 0m7.189s
sys 0m0.912s 00:09:24.775 ************************************ 00:09:24.775 END TEST raid_superblock_test 00:09:24.775 ************************************ 00:09:24.775 22:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.775 22:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.033 22:26:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:25.033 22:26:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:25.033 22:26:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.033 22:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.033 ************************************ 00:09:25.033 START TEST raid_read_error_test 00:09:25.033 ************************************ 00:09:25.033 22:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GQR8xsTueV 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61717 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61717 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61717 ']' 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.034 22:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.034 [2024-09-27 22:26:20.781680] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:09:25.034 [2024-09-27 22:26:20.781812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:09:25.292 [2024-09-27 22:26:20.951908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.550 [2024-09-27 22:26:21.181166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.550 [2024-09-27 22:26:21.415437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.550 [2024-09-27 22:26:21.415664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.118 BaseBdev1_malloc 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.118 true 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.118 [2024-09-27 22:26:21.955324] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:26.118 [2024-09-27 22:26:21.955523] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.118 [2024-09-27 22:26:21.955558] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:26.118 [2024-09-27 22:26:21.955574] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.118 [2024-09-27 22:26:21.958115] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.118 [2024-09-27 22:26:21.958157] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:26.118 BaseBdev1 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.118 22:26:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.377 BaseBdev2_malloc 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.377 true 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.377 [2024-09-27 22:26:22.027978] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:26.377 [2024-09-27 22:26:22.028057] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.377 [2024-09-27 22:26:22.028080] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:26.377 [2024-09-27 22:26:22.028094] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.377 [2024-09-27 22:26:22.030573] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:09:26.377 [2024-09-27 22:26:22.030622] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:26.377 BaseBdev2 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.377 [2024-09-27 22:26:22.040052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.377 [2024-09-27 22:26:22.042263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.377 [2024-09-27 22:26:22.042600] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.377 [2024-09-27 22:26:22.042747] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:26.377 [2024-09-27 22:26:22.043069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:26.377 [2024-09-27 22:26:22.043241] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.377 [2024-09-27 22:26:22.043253] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:26.377 [2024-09-27 22:26:22.043426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.377 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.377 "name": "raid_bdev1", 00:09:26.377 "uuid": "a51022dc-efa4-468e-8c18-d83a2e4ce562", 00:09:26.377 "strip_size_kb": 64, 00:09:26.377 "state": "online", 00:09:26.377 "raid_level": "raid0", 00:09:26.378 "superblock": true, 00:09:26.378 "num_base_bdevs": 2, 00:09:26.378 "num_base_bdevs_discovered": 2, 00:09:26.378 "num_base_bdevs_operational": 2, 00:09:26.378 "base_bdevs_list": [ 00:09:26.378 { 00:09:26.378 "name": "BaseBdev1", 00:09:26.378 "uuid": 
"656cb776-b583-5c56-8e47-705eb4598842", 00:09:26.378 "is_configured": true, 00:09:26.378 "data_offset": 2048, 00:09:26.378 "data_size": 63488 00:09:26.378 }, 00:09:26.378 { 00:09:26.378 "name": "BaseBdev2", 00:09:26.378 "uuid": "c272bb67-e60b-5c9c-b579-8168030174d7", 00:09:26.378 "is_configured": true, 00:09:26.378 "data_offset": 2048, 00:09:26.378 "data_size": 63488 00:09:26.378 } 00:09:26.378 ] 00:09:26.378 }' 00:09:26.378 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.378 22:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.636 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.636 22:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.900 [2024-09-27 22:26:22.568927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.834 "name": "raid_bdev1", 00:09:27.834 "uuid": "a51022dc-efa4-468e-8c18-d83a2e4ce562", 00:09:27.834 "strip_size_kb": 64, 00:09:27.834 "state": "online", 00:09:27.834 "raid_level": "raid0", 00:09:27.834 "superblock": true, 00:09:27.834 "num_base_bdevs": 2, 00:09:27.834 "num_base_bdevs_discovered": 2, 00:09:27.834 "num_base_bdevs_operational": 2, 00:09:27.834 "base_bdevs_list": [ 00:09:27.834 { 00:09:27.834 "name": "BaseBdev1", 00:09:27.834 "uuid": 
"656cb776-b583-5c56-8e47-705eb4598842", 00:09:27.834 "is_configured": true, 00:09:27.834 "data_offset": 2048, 00:09:27.834 "data_size": 63488 00:09:27.834 }, 00:09:27.834 { 00:09:27.834 "name": "BaseBdev2", 00:09:27.834 "uuid": "c272bb67-e60b-5c9c-b579-8168030174d7", 00:09:27.834 "is_configured": true, 00:09:27.834 "data_offset": 2048, 00:09:27.834 "data_size": 63488 00:09:27.834 } 00:09:27.834 ] 00:09:27.834 }' 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.834 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.122 [2024-09-27 22:26:23.895061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.122 [2024-09-27 22:26:23.895996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.122 [2024-09-27 22:26:23.898620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.122 [2024-09-27 22:26:23.898664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.122 [2024-09-27 22:26:23.898695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.122 [2024-09-27 22:26:23.898710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:28.122 { 00:09:28.122 "results": [ 00:09:28.122 { 00:09:28.122 "job": "raid_bdev1", 00:09:28.122 "core_mask": "0x1", 00:09:28.122 "workload": "randrw", 00:09:28.122 "percentage": 50, 00:09:28.122 "status": "finished", 00:09:28.122 "queue_depth": 1, 00:09:28.122 "io_size": 
131072, 00:09:28.122 "runtime": 1.326923, 00:09:28.122 "iops": 16302.37775665958, 00:09:28.122 "mibps": 2037.7972195824475, 00:09:28.122 "io_failed": 1, 00:09:28.122 "io_timeout": 0, 00:09:28.122 "avg_latency_us": 84.30902765130693, 00:09:28.122 "min_latency_us": 27.759036144578314, 00:09:28.122 "max_latency_us": 1421.2626506024096 00:09:28.122 } 00:09:28.122 ], 00:09:28.122 "core_count": 1 00:09:28.122 } 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61717 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61717 ']' 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61717 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61717 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.122 killing process with pid 61717 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61717' 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61717 00:09:28.122 [2024-09-27 22:26:23.941229] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.122 22:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61717 00:09:28.379 [2024-09-27 22:26:24.082125] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.278 22:26:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GQR8xsTueV 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.278 ************************************ 00:09:30.278 END TEST raid_read_error_test 00:09:30.278 ************************************ 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:30.278 00:09:30.278 real 0m5.469s 00:09:30.278 user 0m6.158s 00:09:30.278 sys 0m0.683s 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.278 22:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.536 22:26:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:30.536 22:26:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:30.536 22:26:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.536 22:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.536 ************************************ 00:09:30.536 START TEST raid_write_error_test 00:09:30.536 ************************************ 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:30.536 
22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.536 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:30.537 22:26:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QJLTWrkAgy 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61868 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61868 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61868 ']' 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.537 22:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.537 [2024-09-27 22:26:26.329706] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:09:30.537 [2024-09-27 22:26:26.329827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61868 ] 00:09:30.795 [2024-09-27 22:26:26.501439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.054 [2024-09-27 22:26:26.740893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.312 [2024-09-27 22:26:26.982864] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.312 [2024-09-27 22:26:26.983128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.571 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.571 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:31.571 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.571 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:31.571 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.571 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 BaseBdev1_malloc 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 true 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 [2024-09-27 22:26:27.512629] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:31.830 [2024-09-27 22:26:27.512693] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.830 [2024-09-27 22:26:27.512713] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:31.830 [2024-09-27 22:26:27.512729] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.830 [2024-09-27 22:26:27.515151] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.830 [2024-09-27 22:26:27.515330] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:31.830 BaseBdev1 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 BaseBdev2_malloc 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:31.830 22:26:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 true 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 [2024-09-27 22:26:27.582825] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:31.830 [2024-09-27 22:26:27.582894] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.830 [2024-09-27 22:26:27.582916] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:31.830 [2024-09-27 22:26:27.582938] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.830 [2024-09-27 22:26:27.585515] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.830 [2024-09-27 22:26:27.585561] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:31.830 BaseBdev2 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 [2024-09-27 22:26:27.594874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:31.830 [2024-09-27 22:26:27.597213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.830 [2024-09-27 22:26:27.597528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.830 [2024-09-27 22:26:27.597627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:31.830 [2024-09-27 22:26:27.597924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:31.830 [2024-09-27 22:26:27.598137] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.830 [2024-09-27 22:26:27.598180] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:31.830 [2024-09-27 22:26:27.598465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.830 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.830 "name": "raid_bdev1", 00:09:31.830 "uuid": "47cce478-a53a-4163-b84e-49eee3b4b15c", 00:09:31.830 "strip_size_kb": 64, 00:09:31.830 "state": "online", 00:09:31.830 "raid_level": "raid0", 00:09:31.830 "superblock": true, 00:09:31.830 "num_base_bdevs": 2, 00:09:31.830 "num_base_bdevs_discovered": 2, 00:09:31.830 "num_base_bdevs_operational": 2, 00:09:31.830 "base_bdevs_list": [ 00:09:31.831 { 00:09:31.831 "name": "BaseBdev1", 00:09:31.831 "uuid": "98b914ef-db7e-5106-b967-20264819871f", 00:09:31.831 "is_configured": true, 00:09:31.831 "data_offset": 2048, 00:09:31.831 "data_size": 63488 00:09:31.831 }, 00:09:31.831 { 00:09:31.831 "name": "BaseBdev2", 00:09:31.831 "uuid": "80c46770-ffe0-5830-ad4f-1760f5efc136", 00:09:31.831 "is_configured": true, 00:09:31.831 "data_offset": 2048, 00:09:31.831 "data_size": 63488 00:09:31.831 } 00:09:31.831 ] 00:09:31.831 }' 00:09:31.831 22:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.831 22:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.408 22:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:32.408 22:26:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:32.408 [2024-09-27 22:26:28.115734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.346 22:26:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.346 "name": "raid_bdev1", 00:09:33.346 "uuid": "47cce478-a53a-4163-b84e-49eee3b4b15c", 00:09:33.346 "strip_size_kb": 64, 00:09:33.346 "state": "online", 00:09:33.346 "raid_level": "raid0", 00:09:33.346 "superblock": true, 00:09:33.346 "num_base_bdevs": 2, 00:09:33.346 "num_base_bdevs_discovered": 2, 00:09:33.346 "num_base_bdevs_operational": 2, 00:09:33.346 "base_bdevs_list": [ 00:09:33.346 { 00:09:33.346 "name": "BaseBdev1", 00:09:33.346 "uuid": "98b914ef-db7e-5106-b967-20264819871f", 00:09:33.346 "is_configured": true, 00:09:33.346 "data_offset": 2048, 00:09:33.346 "data_size": 63488 00:09:33.346 }, 00:09:33.346 { 00:09:33.346 "name": "BaseBdev2", 00:09:33.346 "uuid": "80c46770-ffe0-5830-ad4f-1760f5efc136", 00:09:33.346 "is_configured": true, 00:09:33.346 "data_offset": 2048, 00:09:33.346 "data_size": 63488 00:09:33.346 } 00:09:33.346 ] 00:09:33.346 }' 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.346 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.914 [2024-09-27 22:26:29.489896] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.914 [2024-09-27 22:26:29.490084] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.914 [2024-09-27 22:26:29.492954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.914 { 00:09:33.914 "results": [ 00:09:33.914 { 00:09:33.914 "job": "raid_bdev1", 00:09:33.914 "core_mask": "0x1", 00:09:33.914 "workload": "randrw", 00:09:33.914 "percentage": 50, 00:09:33.914 "status": "finished", 00:09:33.914 "queue_depth": 1, 00:09:33.914 "io_size": 131072, 00:09:33.914 "runtime": 1.374399, 00:09:33.914 "iops": 16204.901196814026, 00:09:33.914 "mibps": 2025.6126496017532, 00:09:33.914 "io_failed": 1, 00:09:33.914 "io_timeout": 0, 00:09:33.914 "avg_latency_us": 84.8007440348202, 00:09:33.914 "min_latency_us": 27.759036144578314, 00:09:33.914 "max_latency_us": 1394.9429718875501 00:09:33.914 } 00:09:33.914 ], 00:09:33.914 "core_count": 1 00:09:33.914 } 00:09:33.914 [2024-09-27 22:26:29.493130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.914 [2024-09-27 22:26:29.493176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.914 [2024-09-27 22:26:29.493192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61868 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61868 ']' 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61868 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61868 00:09:33.914 killing process with pid 61868 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61868' 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61868 00:09:33.914 [2024-09-27 22:26:29.536049] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.914 22:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61868 00:09:33.914 [2024-09-27 22:26:29.682700] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QJLTWrkAgy 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:36.451 ************************************ 00:09:36.451 END TEST raid_write_error_test 00:09:36.451 ************************************ 00:09:36.451 00:09:36.451 real 0m5.688s 00:09:36.451 user 0m6.428s 00:09:36.451 sys 0m0.704s 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.451 22:26:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.451 22:26:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:36.451 22:26:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:36.451 22:26:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:36.451 22:26:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.451 22:26:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.451 ************************************ 00:09:36.451 START TEST raid_state_function_test 00:09:36.451 ************************************ 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:36.451 Process raid pid: 62028 00:09:36.451 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62028 
00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62028' 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62028 00:09:36.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62028 ']' 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.452 22:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.452 [2024-09-27 22:26:32.087633] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:09:36.452 [2024-09-27 22:26:32.088448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.452 [2024-09-27 22:26:32.280626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.711 [2024-09-27 22:26:32.537266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.970 [2024-09-27 22:26:32.799644] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.970 [2024-09-27 22:26:32.799877] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.538 [2024-09-27 22:26:33.296418] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.538 [2024-09-27 22:26:33.297497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.538 [2024-09-27 22:26:33.297527] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.538 [2024-09-27 22:26:33.297544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.538 22:26:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.538 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.538 "name": "Existed_Raid", 00:09:37.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.538 "strip_size_kb": 64, 00:09:37.538 "state": "configuring", 00:09:37.538 
"raid_level": "concat", 00:09:37.538 "superblock": false, 00:09:37.538 "num_base_bdevs": 2, 00:09:37.538 "num_base_bdevs_discovered": 0, 00:09:37.538 "num_base_bdevs_operational": 2, 00:09:37.538 "base_bdevs_list": [ 00:09:37.538 { 00:09:37.538 "name": "BaseBdev1", 00:09:37.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.538 "is_configured": false, 00:09:37.538 "data_offset": 0, 00:09:37.538 "data_size": 0 00:09:37.538 }, 00:09:37.538 { 00:09:37.538 "name": "BaseBdev2", 00:09:37.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.538 "is_configured": false, 00:09:37.539 "data_offset": 0, 00:09:37.539 "data_size": 0 00:09:37.539 } 00:09:37.539 ] 00:09:37.539 }' 00:09:37.539 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.539 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.106 [2024-09-27 22:26:33.743569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.106 [2024-09-27 22:26:33.743745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:38.106 [2024-09-27 22:26:33.755568] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.106 [2024-09-27 22:26:33.755739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.106 [2024-09-27 22:26:33.755831] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.106 [2024-09-27 22:26:33.755879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.106 [2024-09-27 22:26:33.812485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.106 BaseBdev1 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.106 [ 00:09:38.106 { 00:09:38.106 "name": "BaseBdev1", 00:09:38.106 "aliases": [ 00:09:38.106 "25bc48e3-bacf-4dc1-a0a5-358b9ca1d4bf" 00:09:38.106 ], 00:09:38.106 "product_name": "Malloc disk", 00:09:38.106 "block_size": 512, 00:09:38.106 "num_blocks": 65536, 00:09:38.106 "uuid": "25bc48e3-bacf-4dc1-a0a5-358b9ca1d4bf", 00:09:38.106 "assigned_rate_limits": { 00:09:38.106 "rw_ios_per_sec": 0, 00:09:38.106 "rw_mbytes_per_sec": 0, 00:09:38.106 "r_mbytes_per_sec": 0, 00:09:38.106 "w_mbytes_per_sec": 0 00:09:38.106 }, 00:09:38.106 "claimed": true, 00:09:38.106 "claim_type": "exclusive_write", 00:09:38.106 "zoned": false, 00:09:38.106 "supported_io_types": { 00:09:38.106 "read": true, 00:09:38.106 "write": true, 00:09:38.106 "unmap": true, 00:09:38.106 "flush": true, 00:09:38.106 "reset": true, 00:09:38.106 "nvme_admin": false, 00:09:38.106 "nvme_io": false, 00:09:38.106 "nvme_io_md": false, 00:09:38.106 "write_zeroes": true, 00:09:38.106 "zcopy": true, 00:09:38.106 "get_zone_info": false, 00:09:38.106 "zone_management": false, 00:09:38.106 "zone_append": false, 00:09:38.106 "compare": false, 00:09:38.106 "compare_and_write": false, 00:09:38.106 "abort": true, 00:09:38.106 "seek_hole": false, 00:09:38.106 "seek_data": false, 00:09:38.106 "copy": true, 00:09:38.106 "nvme_iov_md": 
false 00:09:38.106 }, 00:09:38.106 "memory_domains": [ 00:09:38.106 { 00:09:38.106 "dma_device_id": "system", 00:09:38.106 "dma_device_type": 1 00:09:38.106 }, 00:09:38.106 { 00:09:38.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.106 "dma_device_type": 2 00:09:38.106 } 00:09:38.106 ], 00:09:38.106 "driver_specific": {} 00:09:38.106 } 00:09:38.106 ] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:38.106 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.107 
22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.107 "name": "Existed_Raid", 00:09:38.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.107 "strip_size_kb": 64, 00:09:38.107 "state": "configuring", 00:09:38.107 "raid_level": "concat", 00:09:38.107 "superblock": false, 00:09:38.107 "num_base_bdevs": 2, 00:09:38.107 "num_base_bdevs_discovered": 1, 00:09:38.107 "num_base_bdevs_operational": 2, 00:09:38.107 "base_bdevs_list": [ 00:09:38.107 { 00:09:38.107 "name": "BaseBdev1", 00:09:38.107 "uuid": "25bc48e3-bacf-4dc1-a0a5-358b9ca1d4bf", 00:09:38.107 "is_configured": true, 00:09:38.107 "data_offset": 0, 00:09:38.107 "data_size": 65536 00:09:38.107 }, 00:09:38.107 { 00:09:38.107 "name": "BaseBdev2", 00:09:38.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.107 "is_configured": false, 00:09:38.107 "data_offset": 0, 00:09:38.107 "data_size": 0 00:09:38.107 } 00:09:38.107 ] 00:09:38.107 }' 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.107 22:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.673 [2024-09-27 22:26:34.283918] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.673 [2024-09-27 22:26:34.284158] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.673 [2024-09-27 22:26:34.295930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.673 [2024-09-27 22:26:34.298353] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.673 [2024-09-27 22:26:34.298526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.673 "name": "Existed_Raid", 00:09:38.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.673 "strip_size_kb": 64, 00:09:38.673 "state": "configuring", 00:09:38.673 "raid_level": "concat", 00:09:38.673 "superblock": false, 00:09:38.673 "num_base_bdevs": 2, 00:09:38.673 "num_base_bdevs_discovered": 1, 00:09:38.673 "num_base_bdevs_operational": 2, 00:09:38.673 "base_bdevs_list": [ 00:09:38.673 { 00:09:38.673 "name": "BaseBdev1", 00:09:38.673 "uuid": "25bc48e3-bacf-4dc1-a0a5-358b9ca1d4bf", 00:09:38.673 "is_configured": true, 00:09:38.673 "data_offset": 0, 00:09:38.673 "data_size": 65536 00:09:38.673 }, 00:09:38.673 { 00:09:38.673 "name": "BaseBdev2", 00:09:38.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.673 "is_configured": false, 00:09:38.673 "data_offset": 0, 00:09:38.673 "data_size": 0 00:09:38.673 } 
00:09:38.673 ] 00:09:38.673 }' 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.673 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.931 [2024-09-27 22:26:34.781721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.931 [2024-09-27 22:26:34.782020] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.931 [2024-09-27 22:26:34.782041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:38.931 [2024-09-27 22:26:34.782367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.931 [2024-09-27 22:26:34.782536] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.931 [2024-09-27 22:26:34.782553] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.931 [2024-09-27 22:26:34.782865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.931 BaseBdev2 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.931 22:26:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.931 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.190 [ 00:09:39.190 { 00:09:39.190 "name": "BaseBdev2", 00:09:39.191 "aliases": [ 00:09:39.191 "d1227ec7-a726-4698-bebc-627eb4e83912" 00:09:39.191 ], 00:09:39.191 "product_name": "Malloc disk", 00:09:39.191 "block_size": 512, 00:09:39.191 "num_blocks": 65536, 00:09:39.191 "uuid": "d1227ec7-a726-4698-bebc-627eb4e83912", 00:09:39.191 "assigned_rate_limits": { 00:09:39.191 "rw_ios_per_sec": 0, 00:09:39.191 "rw_mbytes_per_sec": 0, 00:09:39.191 "r_mbytes_per_sec": 0, 00:09:39.191 "w_mbytes_per_sec": 0 00:09:39.191 }, 00:09:39.191 "claimed": true, 00:09:39.191 "claim_type": "exclusive_write", 00:09:39.191 "zoned": false, 00:09:39.191 "supported_io_types": { 00:09:39.191 "read": true, 00:09:39.191 "write": true, 00:09:39.191 "unmap": true, 00:09:39.191 "flush": true, 00:09:39.191 "reset": true, 00:09:39.191 "nvme_admin": false, 00:09:39.191 "nvme_io": false, 00:09:39.191 "nvme_io_md": 
false, 00:09:39.191 "write_zeroes": true, 00:09:39.191 "zcopy": true, 00:09:39.191 "get_zone_info": false, 00:09:39.191 "zone_management": false, 00:09:39.191 "zone_append": false, 00:09:39.191 "compare": false, 00:09:39.191 "compare_and_write": false, 00:09:39.191 "abort": true, 00:09:39.191 "seek_hole": false, 00:09:39.191 "seek_data": false, 00:09:39.191 "copy": true, 00:09:39.191 "nvme_iov_md": false 00:09:39.191 }, 00:09:39.191 "memory_domains": [ 00:09:39.191 { 00:09:39.191 "dma_device_id": "system", 00:09:39.191 "dma_device_type": 1 00:09:39.191 }, 00:09:39.191 { 00:09:39.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.191 "dma_device_type": 2 00:09:39.191 } 00:09:39.191 ], 00:09:39.191 "driver_specific": {} 00:09:39.191 } 00:09:39.191 ] 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.191 "name": "Existed_Raid", 00:09:39.191 "uuid": "965b7aca-a8c0-4afb-bd90-5a724d12874f", 00:09:39.191 "strip_size_kb": 64, 00:09:39.191 "state": "online", 00:09:39.191 "raid_level": "concat", 00:09:39.191 "superblock": false, 00:09:39.191 "num_base_bdevs": 2, 00:09:39.191 "num_base_bdevs_discovered": 2, 00:09:39.191 "num_base_bdevs_operational": 2, 00:09:39.191 "base_bdevs_list": [ 00:09:39.191 { 00:09:39.191 "name": "BaseBdev1", 00:09:39.191 "uuid": "25bc48e3-bacf-4dc1-a0a5-358b9ca1d4bf", 00:09:39.191 "is_configured": true, 00:09:39.191 "data_offset": 0, 00:09:39.191 "data_size": 65536 00:09:39.191 }, 00:09:39.191 { 00:09:39.191 "name": "BaseBdev2", 00:09:39.191 "uuid": "d1227ec7-a726-4698-bebc-627eb4e83912", 00:09:39.191 "is_configured": true, 00:09:39.191 "data_offset": 0, 00:09:39.191 "data_size": 65536 00:09:39.191 } 00:09:39.191 ] 00:09:39.191 }' 00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:39.191 22:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.450 [2024-09-27 22:26:35.285406] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.450 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.450 "name": "Existed_Raid", 00:09:39.450 "aliases": [ 00:09:39.450 "965b7aca-a8c0-4afb-bd90-5a724d12874f" 00:09:39.450 ], 00:09:39.450 "product_name": "Raid Volume", 00:09:39.450 "block_size": 512, 00:09:39.450 "num_blocks": 131072, 00:09:39.450 "uuid": "965b7aca-a8c0-4afb-bd90-5a724d12874f", 00:09:39.450 "assigned_rate_limits": { 00:09:39.450 "rw_ios_per_sec": 0, 00:09:39.450 "rw_mbytes_per_sec": 0, 00:09:39.450 "r_mbytes_per_sec": 
0, 00:09:39.450 "w_mbytes_per_sec": 0 00:09:39.450 }, 00:09:39.450 "claimed": false, 00:09:39.450 "zoned": false, 00:09:39.450 "supported_io_types": { 00:09:39.450 "read": true, 00:09:39.450 "write": true, 00:09:39.450 "unmap": true, 00:09:39.450 "flush": true, 00:09:39.450 "reset": true, 00:09:39.450 "nvme_admin": false, 00:09:39.450 "nvme_io": false, 00:09:39.450 "nvme_io_md": false, 00:09:39.450 "write_zeroes": true, 00:09:39.450 "zcopy": false, 00:09:39.450 "get_zone_info": false, 00:09:39.450 "zone_management": false, 00:09:39.450 "zone_append": false, 00:09:39.450 "compare": false, 00:09:39.450 "compare_and_write": false, 00:09:39.450 "abort": false, 00:09:39.450 "seek_hole": false, 00:09:39.450 "seek_data": false, 00:09:39.450 "copy": false, 00:09:39.450 "nvme_iov_md": false 00:09:39.450 }, 00:09:39.450 "memory_domains": [ 00:09:39.450 { 00:09:39.450 "dma_device_id": "system", 00:09:39.450 "dma_device_type": 1 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.450 "dma_device_type": 2 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "system", 00:09:39.450 "dma_device_type": 1 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.450 "dma_device_type": 2 00:09:39.450 } 00:09:39.450 ], 00:09:39.450 "driver_specific": { 00:09:39.450 "raid": { 00:09:39.450 "uuid": "965b7aca-a8c0-4afb-bd90-5a724d12874f", 00:09:39.450 "strip_size_kb": 64, 00:09:39.450 "state": "online", 00:09:39.450 "raid_level": "concat", 00:09:39.450 "superblock": false, 00:09:39.450 "num_base_bdevs": 2, 00:09:39.450 "num_base_bdevs_discovered": 2, 00:09:39.450 "num_base_bdevs_operational": 2, 00:09:39.450 "base_bdevs_list": [ 00:09:39.450 { 00:09:39.450 "name": "BaseBdev1", 00:09:39.450 "uuid": "25bc48e3-bacf-4dc1-a0a5-358b9ca1d4bf", 00:09:39.450 "is_configured": true, 00:09:39.450 "data_offset": 0, 00:09:39.450 "data_size": 65536 00:09:39.450 }, 00:09:39.450 { 00:09:39.450 "name": "BaseBdev2", 
00:09:39.450 "uuid": "d1227ec7-a726-4698-bebc-627eb4e83912", 00:09:39.450 "is_configured": true, 00:09:39.450 "data_offset": 0, 00:09:39.450 "data_size": 65536 00:09:39.450 } 00:09:39.450 ] 00:09:39.450 } 00:09:39.450 } 00:09:39.450 }' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:39.710 BaseBdev2' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.710 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.710 [2024-09-27 22:26:35.536862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.710 [2024-09-27 22:26:35.537057] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.710 [2024-09-27 22:26:35.537204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:39.969 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.970 "name": "Existed_Raid", 00:09:39.970 "uuid": "965b7aca-a8c0-4afb-bd90-5a724d12874f", 00:09:39.970 "strip_size_kb": 64, 00:09:39.970 
"state": "offline", 00:09:39.970 "raid_level": "concat", 00:09:39.970 "superblock": false, 00:09:39.970 "num_base_bdevs": 2, 00:09:39.970 "num_base_bdevs_discovered": 1, 00:09:39.970 "num_base_bdevs_operational": 1, 00:09:39.970 "base_bdevs_list": [ 00:09:39.970 { 00:09:39.970 "name": null, 00:09:39.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.970 "is_configured": false, 00:09:39.970 "data_offset": 0, 00:09:39.970 "data_size": 65536 00:09:39.970 }, 00:09:39.970 { 00:09:39.970 "name": "BaseBdev2", 00:09:39.970 "uuid": "d1227ec7-a726-4698-bebc-627eb4e83912", 00:09:39.970 "is_configured": true, 00:09:39.970 "data_offset": 0, 00:09:39.970 "data_size": 65536 00:09:39.970 } 00:09:39.970 ] 00:09:39.970 }' 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.970 22:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.228 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.486 [2024-09-27 22:26:36.124174] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.486 [2024-09-27 22:26:36.124353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62028 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62028 ']' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 62028 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62028 00:09:40.486 killing process with pid 62028 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62028' 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62028 00:09:40.486 [2024-09-27 22:26:36.325611] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.486 22:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62028 00:09:40.486 [2024-09-27 22:26:36.344851] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.026 00:09:43.026 real 0m6.500s 00:09:43.026 user 0m8.653s 00:09:43.026 sys 0m1.054s 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.026 ************************************ 00:09:43.026 END TEST raid_state_function_test 00:09:43.026 ************************************ 00:09:43.026 22:26:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:43.026 22:26:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:09:43.026 22:26:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.026 22:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.026 ************************************ 00:09:43.026 START TEST raid_state_function_test_sb 00:09:43.026 ************************************ 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62292 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.026 Process raid pid: 62292 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62292' 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62292 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62292 ']' 00:09:43.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.026 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.027 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.027 22:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.027 [2024-09-27 22:26:38.659526] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:09:43.027 [2024-09-27 22:26:38.659872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.027 [2024-09-27 22:26:38.838374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.286 [2024-09-27 22:26:39.089681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.546 [2024-09-27 22:26:39.344212] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.546 [2024-09-27 22:26:39.344248] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.123 [2024-09-27 22:26:39.847260] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.123 [2024-09-27 22:26:39.847462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.123 [2024-09-27 22:26:39.847566] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.123 [2024-09-27 22:26:39.847683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.123 
22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.123 "name": "Existed_Raid", 00:09:44.123 "uuid": "c2a976df-52cb-4d9b-94a1-dad1957efc05", 00:09:44.123 "strip_size_kb": 64, 00:09:44.123 "state": "configuring", 00:09:44.123 "raid_level": "concat", 00:09:44.123 "superblock": true, 00:09:44.123 "num_base_bdevs": 2, 00:09:44.123 "num_base_bdevs_discovered": 0, 00:09:44.123 "num_base_bdevs_operational": 2, 00:09:44.123 "base_bdevs_list": [ 00:09:44.123 { 00:09:44.123 "name": "BaseBdev1", 00:09:44.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.123 "is_configured": false, 00:09:44.123 "data_offset": 0, 00:09:44.123 "data_size": 0 00:09:44.123 }, 00:09:44.123 { 00:09:44.123 "name": "BaseBdev2", 00:09:44.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.123 "is_configured": false, 00:09:44.123 "data_offset": 0, 00:09:44.123 "data_size": 0 00:09:44.123 } 00:09:44.123 ] 00:09:44.123 }' 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.123 22:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.703 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.703 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.704 [2024-09-27 22:26:40.299165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.704 [2024-09-27 22:26:40.299348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.704 [2024-09-27 22:26:40.311170] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.704 [2024-09-27 22:26:40.311351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.704 [2024-09-27 22:26:40.311441] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.704 [2024-09-27 22:26:40.311496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.704 BaseBdev1 00:09:44.704 [2024-09-27 22:26:40.367253] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.704 [ 00:09:44.704 { 00:09:44.704 "name": "BaseBdev1", 00:09:44.704 "aliases": [ 00:09:44.704 "d428d333-1d99-4473-aac4-bfa6b0df048e" 00:09:44.704 ], 00:09:44.704 "product_name": "Malloc disk", 00:09:44.704 "block_size": 512, 00:09:44.704 "num_blocks": 65536, 00:09:44.704 "uuid": 
"d428d333-1d99-4473-aac4-bfa6b0df048e", 00:09:44.704 "assigned_rate_limits": { 00:09:44.704 "rw_ios_per_sec": 0, 00:09:44.704 "rw_mbytes_per_sec": 0, 00:09:44.704 "r_mbytes_per_sec": 0, 00:09:44.704 "w_mbytes_per_sec": 0 00:09:44.704 }, 00:09:44.704 "claimed": true, 00:09:44.704 "claim_type": "exclusive_write", 00:09:44.704 "zoned": false, 00:09:44.704 "supported_io_types": { 00:09:44.704 "read": true, 00:09:44.704 "write": true, 00:09:44.704 "unmap": true, 00:09:44.704 "flush": true, 00:09:44.704 "reset": true, 00:09:44.704 "nvme_admin": false, 00:09:44.704 "nvme_io": false, 00:09:44.704 "nvme_io_md": false, 00:09:44.704 "write_zeroes": true, 00:09:44.704 "zcopy": true, 00:09:44.704 "get_zone_info": false, 00:09:44.704 "zone_management": false, 00:09:44.704 "zone_append": false, 00:09:44.704 "compare": false, 00:09:44.704 "compare_and_write": false, 00:09:44.704 "abort": true, 00:09:44.704 "seek_hole": false, 00:09:44.704 "seek_data": false, 00:09:44.704 "copy": true, 00:09:44.704 "nvme_iov_md": false 00:09:44.704 }, 00:09:44.704 "memory_domains": [ 00:09:44.704 { 00:09:44.704 "dma_device_id": "system", 00:09:44.704 "dma_device_type": 1 00:09:44.704 }, 00:09:44.704 { 00:09:44.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.704 "dma_device_type": 2 00:09:44.704 } 00:09:44.704 ], 00:09:44.704 "driver_specific": {} 00:09:44.704 } 00:09:44.704 ] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.704 
22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.704 "name": "Existed_Raid", 00:09:44.704 "uuid": "9bc4d82d-4ecf-4f37-b1cf-41d369b60c3e", 00:09:44.704 "strip_size_kb": 64, 00:09:44.704 "state": "configuring", 00:09:44.704 "raid_level": "concat", 00:09:44.704 "superblock": true, 00:09:44.704 "num_base_bdevs": 2, 00:09:44.704 "num_base_bdevs_discovered": 1, 00:09:44.704 "num_base_bdevs_operational": 2, 00:09:44.704 "base_bdevs_list": [ 00:09:44.704 { 00:09:44.704 "name": "BaseBdev1", 00:09:44.704 "uuid": "d428d333-1d99-4473-aac4-bfa6b0df048e", 00:09:44.704 
"is_configured": true, 00:09:44.704 "data_offset": 2048, 00:09:44.704 "data_size": 63488 00:09:44.704 }, 00:09:44.704 { 00:09:44.704 "name": "BaseBdev2", 00:09:44.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.704 "is_configured": false, 00:09:44.704 "data_offset": 0, 00:09:44.704 "data_size": 0 00:09:44.704 } 00:09:44.704 ] 00:09:44.704 }' 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.704 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.964 [2024-09-27 22:26:40.807157] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.964 [2024-09-27 22:26:40.807212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.964 [2024-09-27 22:26:40.819179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.964 [2024-09-27 22:26:40.821446] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.964 [2024-09-27 22:26:40.821496] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.964 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.964 
22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.224 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.224 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.224 "name": "Existed_Raid", 00:09:45.224 "uuid": "acb3de85-f0fb-42f6-9b89-a2f59810ff41", 00:09:45.224 "strip_size_kb": 64, 00:09:45.224 "state": "configuring", 00:09:45.224 "raid_level": "concat", 00:09:45.224 "superblock": true, 00:09:45.224 "num_base_bdevs": 2, 00:09:45.224 "num_base_bdevs_discovered": 1, 00:09:45.224 "num_base_bdevs_operational": 2, 00:09:45.224 "base_bdevs_list": [ 00:09:45.224 { 00:09:45.224 "name": "BaseBdev1", 00:09:45.224 "uuid": "d428d333-1d99-4473-aac4-bfa6b0df048e", 00:09:45.224 "is_configured": true, 00:09:45.224 "data_offset": 2048, 00:09:45.224 "data_size": 63488 00:09:45.224 }, 00:09:45.224 { 00:09:45.224 "name": "BaseBdev2", 00:09:45.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.224 "is_configured": false, 00:09:45.224 "data_offset": 0, 00:09:45.224 "data_size": 0 00:09:45.224 } 00:09:45.224 ] 00:09:45.224 }' 00:09:45.224 22:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.224 22:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.484 [2024-09-27 22:26:41.289701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.484 BaseBdev2 00:09:45.484 [2024-09-27 22:26:41.290357] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x617000007e80 00:09:45.484 [2024-09-27 22:26:41.290386] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:45.484 [2024-09-27 22:26:41.290764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.484 [2024-09-27 22:26:41.290939] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.484 [2024-09-27 22:26:41.290958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:45.484 [2024-09-27 22:26:41.291176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.484 [ 00:09:45.484 { 00:09:45.484 "name": "BaseBdev2", 00:09:45.484 "aliases": [ 00:09:45.484 "3076bb50-f1ad-4d0a-b0bd-47f7c5200bc1" 00:09:45.484 ], 00:09:45.484 "product_name": "Malloc disk", 00:09:45.484 "block_size": 512, 00:09:45.484 "num_blocks": 65536, 00:09:45.484 "uuid": "3076bb50-f1ad-4d0a-b0bd-47f7c5200bc1", 00:09:45.484 "assigned_rate_limits": { 00:09:45.484 "rw_ios_per_sec": 0, 00:09:45.484 "rw_mbytes_per_sec": 0, 00:09:45.484 "r_mbytes_per_sec": 0, 00:09:45.484 "w_mbytes_per_sec": 0 00:09:45.484 }, 00:09:45.484 "claimed": true, 00:09:45.484 "claim_type": "exclusive_write", 00:09:45.484 "zoned": false, 00:09:45.484 "supported_io_types": { 00:09:45.484 "read": true, 00:09:45.484 "write": true, 00:09:45.484 "unmap": true, 00:09:45.484 "flush": true, 00:09:45.484 "reset": true, 00:09:45.484 "nvme_admin": false, 00:09:45.484 "nvme_io": false, 00:09:45.484 "nvme_io_md": false, 00:09:45.484 "write_zeroes": true, 00:09:45.484 "zcopy": true, 00:09:45.484 "get_zone_info": false, 00:09:45.484 "zone_management": false, 00:09:45.484 "zone_append": false, 00:09:45.484 "compare": false, 00:09:45.484 "compare_and_write": false, 00:09:45.484 "abort": true, 00:09:45.484 "seek_hole": false, 00:09:45.484 "seek_data": false, 00:09:45.484 "copy": true, 00:09:45.484 "nvme_iov_md": false 00:09:45.484 }, 00:09:45.484 "memory_domains": [ 00:09:45.484 { 00:09:45.484 "dma_device_id": "system", 00:09:45.484 "dma_device_type": 1 00:09:45.484 }, 00:09:45.484 { 00:09:45.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.484 "dma_device_type": 2 00:09:45.484 } 00:09:45.484 ], 00:09:45.484 "driver_specific": {} 00:09:45.484 } 00:09:45.484 ] 00:09:45.484 22:26:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.484 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.485 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.485 22:26:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.744 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.744 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.744 "name": "Existed_Raid", 00:09:45.744 "uuid": "acb3de85-f0fb-42f6-9b89-a2f59810ff41", 00:09:45.744 "strip_size_kb": 64, 00:09:45.744 "state": "online", 00:09:45.744 "raid_level": "concat", 00:09:45.744 "superblock": true, 00:09:45.744 "num_base_bdevs": 2, 00:09:45.744 "num_base_bdevs_discovered": 2, 00:09:45.744 "num_base_bdevs_operational": 2, 00:09:45.744 "base_bdevs_list": [ 00:09:45.744 { 00:09:45.744 "name": "BaseBdev1", 00:09:45.744 "uuid": "d428d333-1d99-4473-aac4-bfa6b0df048e", 00:09:45.744 "is_configured": true, 00:09:45.744 "data_offset": 2048, 00:09:45.744 "data_size": 63488 00:09:45.744 }, 00:09:45.744 { 00:09:45.744 "name": "BaseBdev2", 00:09:45.744 "uuid": "3076bb50-f1ad-4d0a-b0bd-47f7c5200bc1", 00:09:45.744 "is_configured": true, 00:09:45.744 "data_offset": 2048, 00:09:45.744 "data_size": 63488 00:09:45.744 } 00:09:45.745 ] 00:09:45.745 }' 00:09:45.745 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.745 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.004 22:26:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.004 [2024-09-27 22:26:41.821413] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.004 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.004 "name": "Existed_Raid", 00:09:46.004 "aliases": [ 00:09:46.004 "acb3de85-f0fb-42f6-9b89-a2f59810ff41" 00:09:46.004 ], 00:09:46.004 "product_name": "Raid Volume", 00:09:46.004 "block_size": 512, 00:09:46.004 "num_blocks": 126976, 00:09:46.004 "uuid": "acb3de85-f0fb-42f6-9b89-a2f59810ff41", 00:09:46.005 "assigned_rate_limits": { 00:09:46.005 "rw_ios_per_sec": 0, 00:09:46.005 "rw_mbytes_per_sec": 0, 00:09:46.005 "r_mbytes_per_sec": 0, 00:09:46.005 "w_mbytes_per_sec": 0 00:09:46.005 }, 00:09:46.005 "claimed": false, 00:09:46.005 "zoned": false, 00:09:46.005 "supported_io_types": { 00:09:46.005 "read": true, 00:09:46.005 "write": true, 00:09:46.005 "unmap": true, 00:09:46.005 "flush": true, 00:09:46.005 "reset": true, 00:09:46.005 "nvme_admin": false, 00:09:46.005 "nvme_io": false, 00:09:46.005 "nvme_io_md": false, 00:09:46.005 "write_zeroes": true, 00:09:46.005 "zcopy": false, 00:09:46.005 "get_zone_info": false, 00:09:46.005 "zone_management": false, 00:09:46.005 "zone_append": false, 00:09:46.005 "compare": false, 00:09:46.005 "compare_and_write": false, 00:09:46.005 "abort": false, 
00:09:46.005 "seek_hole": false, 00:09:46.005 "seek_data": false, 00:09:46.005 "copy": false, 00:09:46.005 "nvme_iov_md": false 00:09:46.005 }, 00:09:46.005 "memory_domains": [ 00:09:46.005 { 00:09:46.005 "dma_device_id": "system", 00:09:46.005 "dma_device_type": 1 00:09:46.005 }, 00:09:46.005 { 00:09:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.005 "dma_device_type": 2 00:09:46.005 }, 00:09:46.005 { 00:09:46.005 "dma_device_id": "system", 00:09:46.005 "dma_device_type": 1 00:09:46.005 }, 00:09:46.005 { 00:09:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.005 "dma_device_type": 2 00:09:46.005 } 00:09:46.005 ], 00:09:46.005 "driver_specific": { 00:09:46.005 "raid": { 00:09:46.005 "uuid": "acb3de85-f0fb-42f6-9b89-a2f59810ff41", 00:09:46.005 "strip_size_kb": 64, 00:09:46.005 "state": "online", 00:09:46.005 "raid_level": "concat", 00:09:46.005 "superblock": true, 00:09:46.005 "num_base_bdevs": 2, 00:09:46.005 "num_base_bdevs_discovered": 2, 00:09:46.005 "num_base_bdevs_operational": 2, 00:09:46.005 "base_bdevs_list": [ 00:09:46.005 { 00:09:46.005 "name": "BaseBdev1", 00:09:46.005 "uuid": "d428d333-1d99-4473-aac4-bfa6b0df048e", 00:09:46.005 "is_configured": true, 00:09:46.005 "data_offset": 2048, 00:09:46.005 "data_size": 63488 00:09:46.005 }, 00:09:46.005 { 00:09:46.005 "name": "BaseBdev2", 00:09:46.005 "uuid": "3076bb50-f1ad-4d0a-b0bd-47f7c5200bc1", 00:09:46.005 "is_configured": true, 00:09:46.005 "data_offset": 2048, 00:09:46.005 "data_size": 63488 00:09:46.005 } 00:09:46.005 ] 00:09:46.005 } 00:09:46.005 } 00:09:46.005 }' 00:09:46.005 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.264 BaseBdev2' 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.264 22:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.264 22:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.264 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.264 [2024-09-27 22:26:42.060856] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.264 [2024-09-27 22:26:42.061024] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.264 [2024-09-27 22:26:42.061186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.524 22:26:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.524 "name": "Existed_Raid", 00:09:46.524 "uuid": "acb3de85-f0fb-42f6-9b89-a2f59810ff41", 00:09:46.524 "strip_size_kb": 64, 00:09:46.524 "state": "offline", 00:09:46.524 "raid_level": "concat", 00:09:46.524 "superblock": true, 00:09:46.524 "num_base_bdevs": 2, 00:09:46.524 "num_base_bdevs_discovered": 1, 00:09:46.524 "num_base_bdevs_operational": 1, 00:09:46.524 "base_bdevs_list": [ 00:09:46.524 { 00:09:46.524 "name": null, 00:09:46.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.524 "is_configured": false, 00:09:46.524 "data_offset": 0, 00:09:46.524 "data_size": 63488 00:09:46.524 }, 00:09:46.524 { 00:09:46.524 "name": 
"BaseBdev2", 00:09:46.524 "uuid": "3076bb50-f1ad-4d0a-b0bd-47f7c5200bc1", 00:09:46.524 "is_configured": true, 00:09:46.524 "data_offset": 2048, 00:09:46.524 "data_size": 63488 00:09:46.524 } 00:09:46.524 ] 00:09:46.524 }' 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.524 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.785 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.785 [2024-09-27 22:26:42.656069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:46.785 [2024-09-27 22:26:42.656254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62292 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62292 ']' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62292 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62292 00:09:47.046 killing process with 
pid 62292 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62292' 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62292 00:09:47.046 [2024-09-27 22:26:42.862016] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.046 22:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62292 00:09:47.046 [2024-09-27 22:26:42.881228] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.645 ************************************ 00:09:49.645 END TEST raid_state_function_test_sb 00:09:49.645 ************************************ 00:09:49.645 22:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.645 00:09:49.645 real 0m6.429s 00:09:49.645 user 0m8.610s 00:09:49.645 sys 0m1.003s 00:09:49.645 22:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.645 22:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.645 22:26:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:49.645 22:26:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:49.645 22:26:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.645 22:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.645 ************************************ 00:09:49.645 START TEST raid_superblock_test 00:09:49.645 ************************************ 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # 
raid_superblock_test concat 2 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62561 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:49.645 22:26:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62561 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62561 ']' 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.645 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.646 22:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.646 [2024-09-27 22:26:45.153357] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:09:49.646 [2024-09-27 22:26:45.153503] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62561 ] 00:09:49.646 [2024-09-27 22:26:45.319602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.904 [2024-09-27 22:26:45.575337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.162 [2024-09-27 22:26:45.833024] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.162 [2024-09-27 22:26:45.833057] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:50.731 
22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.731 malloc1 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.731 [2024-09-27 22:26:46.399784] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.731 [2024-09-27 22:26:46.399855] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.731 [2024-09-27 22:26:46.399881] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:50.731 [2024-09-27 22:26:46.399898] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.731 [2024-09-27 22:26:46.402619] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.731 [2024-09-27 22:26:46.402766] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.731 pt1 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.731 malloc2 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.731 [2024-09-27 22:26:46.465905] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.731 [2024-09-27 22:26:46.465984] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.731 [2024-09-27 22:26:46.466027] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:50.731 [2024-09-27 22:26:46.466040] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.731 [2024-09-27 22:26:46.468618] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.731 pt2 00:09:50.731 [2024-09-27 22:26:46.469082] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:50.731 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.732 [2024-09-27 22:26:46.477960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.732 [2024-09-27 22:26:46.480354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.732 [2024-09-27 22:26:46.480664] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:50.732 [2024-09-27 22:26:46.480766] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:50.732 [2024-09-27 22:26:46.481114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:50.732 [2024-09-27 22:26:46.481317] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:50.732 [2024-09-27 22:26:46.481362] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:50.732 [2024-09-27 22:26:46.481635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.732 "name": "raid_bdev1", 00:09:50.732 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:50.732 "strip_size_kb": 64, 00:09:50.732 "state": "online", 00:09:50.732 "raid_level": "concat", 00:09:50.732 "superblock": true, 00:09:50.732 "num_base_bdevs": 2, 00:09:50.732 "num_base_bdevs_discovered": 2, 00:09:50.732 "num_base_bdevs_operational": 2, 00:09:50.732 "base_bdevs_list": [ 00:09:50.732 { 00:09:50.732 "name": "pt1", 
00:09:50.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.732 "is_configured": true, 00:09:50.732 "data_offset": 2048, 00:09:50.732 "data_size": 63488 00:09:50.732 }, 00:09:50.732 { 00:09:50.732 "name": "pt2", 00:09:50.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.732 "is_configured": true, 00:09:50.732 "data_offset": 2048, 00:09:50.732 "data_size": 63488 00:09:50.732 } 00:09:50.732 ] 00:09:50.732 }' 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.732 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.302 [2024-09-27 22:26:46.921607] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.302 "name": "raid_bdev1", 00:09:51.302 "aliases": [ 00:09:51.302 "bea06b84-5dbc-4b44-9970-5d5b29637caf" 00:09:51.302 ], 00:09:51.302 "product_name": "Raid Volume", 00:09:51.302 "block_size": 512, 00:09:51.302 "num_blocks": 126976, 00:09:51.302 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:51.302 "assigned_rate_limits": { 00:09:51.302 "rw_ios_per_sec": 0, 00:09:51.302 "rw_mbytes_per_sec": 0, 00:09:51.302 "r_mbytes_per_sec": 0, 00:09:51.302 "w_mbytes_per_sec": 0 00:09:51.302 }, 00:09:51.302 "claimed": false, 00:09:51.302 "zoned": false, 00:09:51.302 "supported_io_types": { 00:09:51.302 "read": true, 00:09:51.302 "write": true, 00:09:51.302 "unmap": true, 00:09:51.302 "flush": true, 00:09:51.302 "reset": true, 00:09:51.302 "nvme_admin": false, 00:09:51.302 "nvme_io": false, 00:09:51.302 "nvme_io_md": false, 00:09:51.302 "write_zeroes": true, 00:09:51.302 "zcopy": false, 00:09:51.302 "get_zone_info": false, 00:09:51.302 "zone_management": false, 00:09:51.302 "zone_append": false, 00:09:51.302 "compare": false, 00:09:51.302 "compare_and_write": false, 00:09:51.302 "abort": false, 00:09:51.302 "seek_hole": false, 00:09:51.302 "seek_data": false, 00:09:51.302 "copy": false, 00:09:51.302 "nvme_iov_md": false 00:09:51.302 }, 00:09:51.302 "memory_domains": [ 00:09:51.302 { 00:09:51.302 "dma_device_id": "system", 00:09:51.302 "dma_device_type": 1 00:09:51.302 }, 00:09:51.302 { 00:09:51.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.302 "dma_device_type": 2 00:09:51.302 }, 00:09:51.302 { 00:09:51.302 "dma_device_id": "system", 00:09:51.302 "dma_device_type": 1 00:09:51.302 }, 00:09:51.302 { 00:09:51.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.302 "dma_device_type": 2 00:09:51.302 } 00:09:51.302 ], 00:09:51.302 "driver_specific": { 00:09:51.302 "raid": { 00:09:51.302 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:51.302 "strip_size_kb": 64, 00:09:51.302 "state": "online", 00:09:51.302 
"raid_level": "concat", 00:09:51.302 "superblock": true, 00:09:51.302 "num_base_bdevs": 2, 00:09:51.302 "num_base_bdevs_discovered": 2, 00:09:51.302 "num_base_bdevs_operational": 2, 00:09:51.302 "base_bdevs_list": [ 00:09:51.302 { 00:09:51.302 "name": "pt1", 00:09:51.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.302 "is_configured": true, 00:09:51.302 "data_offset": 2048, 00:09:51.302 "data_size": 63488 00:09:51.302 }, 00:09:51.302 { 00:09:51.302 "name": "pt2", 00:09:51.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.302 "is_configured": true, 00:09:51.302 "data_offset": 2048, 00:09:51.302 "data_size": 63488 00:09:51.302 } 00:09:51.302 ] 00:09:51.302 } 00:09:51.302 } 00:09:51.302 }' 00:09:51.302 22:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.302 pt2' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.302 22:26:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:51.302 [2024-09-27 22:26:47.153356] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.302 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bea06b84-5dbc-4b44-9970-5d5b29637caf 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
bea06b84-5dbc-4b44-9970-5d5b29637caf ']' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 [2024-09-27 22:26:47.201031] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.562 [2024-09-27 22:26:47.201173] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.562 [2024-09-27 22:26:47.201351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.562 [2024-09-27 22:26:47.201433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.562 [2024-09-27 22:26:47.201646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.562 22:26:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 [2024-09-27 22:26:47.328850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:51.562 [2024-09-27 22:26:47.331239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:51.562 [2024-09-27 22:26:47.331440] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:51.562 [2024-09-27 22:26:47.331505] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:51.562 [2024-09-27 22:26:47.331525] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.562 [2024-09-27 22:26:47.331538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:51.562 request: 00:09:51.562 { 00:09:51.562 "name": "raid_bdev1", 00:09:51.562 "raid_level": "concat", 00:09:51.562 "base_bdevs": [ 00:09:51.562 "malloc1", 00:09:51.562 "malloc2" 00:09:51.562 ], 00:09:51.562 "strip_size_kb": 64, 
00:09:51.562 "superblock": false, 00:09:51.562 "method": "bdev_raid_create", 00:09:51.562 "req_id": 1 00:09:51.562 } 00:09:51.562 Got JSON-RPC error response 00:09:51.562 response: 00:09:51.562 { 00:09:51.562 "code": -17, 00:09:51.562 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:51.562 } 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.562 [2024-09-27 22:26:47.392739] vbdev_passthru.c: 687:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:09:51.562 [2024-09-27 22:26:47.392801] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.562 [2024-09-27 22:26:47.392837] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:51.562 [2024-09-27 22:26:47.392853] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.562 [2024-09-27 22:26:47.395551] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.562 [2024-09-27 22:26:47.395597] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.562 [2024-09-27 22:26:47.395684] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.562 [2024-09-27 22:26:47.395747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.562 pt1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.562 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.563 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.563 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.563 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.563 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.563 "name": "raid_bdev1", 00:09:51.563 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:51.563 "strip_size_kb": 64, 00:09:51.563 "state": "configuring", 00:09:51.563 "raid_level": "concat", 00:09:51.563 "superblock": true, 00:09:51.563 "num_base_bdevs": 2, 00:09:51.563 "num_base_bdevs_discovered": 1, 00:09:51.563 "num_base_bdevs_operational": 2, 00:09:51.563 "base_bdevs_list": [ 00:09:51.563 { 00:09:51.563 "name": "pt1", 00:09:51.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.563 "is_configured": true, 00:09:51.563 "data_offset": 2048, 00:09:51.563 "data_size": 63488 00:09:51.563 }, 00:09:51.563 { 00:09:51.563 "name": null, 00:09:51.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.563 "is_configured": false, 00:09:51.563 "data_offset": 2048, 00:09:51.563 "data_size": 63488 00:09:51.563 } 00:09:51.563 ] 00:09:51.563 }' 00:09:51.563 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.563 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.130 [2024-09-27 22:26:47.808205] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.130 [2024-09-27 22:26:47.808409] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.130 [2024-09-27 22:26:47.808440] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:52.130 [2024-09-27 22:26:47.808456] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.130 [2024-09-27 22:26:47.808962] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.130 [2024-09-27 22:26:47.809000] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.130 [2024-09-27 22:26:47.809090] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.130 [2024-09-27 22:26:47.809117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.130 [2024-09-27 22:26:47.809234] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.130 [2024-09-27 22:26:47.809247] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:52.130 [2024-09-27 22:26:47.809497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.130 [2024-09-27 22:26:47.809637] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:52.130 [2024-09-27 22:26:47.809654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:52.130 [2024-09-27 22:26:47.809792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.130 pt2 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.130 "name": "raid_bdev1", 00:09:52.130 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:52.130 "strip_size_kb": 64, 00:09:52.130 "state": "online", 00:09:52.130 "raid_level": "concat", 00:09:52.130 "superblock": true, 00:09:52.130 "num_base_bdevs": 2, 00:09:52.130 "num_base_bdevs_discovered": 2, 00:09:52.130 "num_base_bdevs_operational": 2, 00:09:52.130 "base_bdevs_list": [ 00:09:52.130 { 00:09:52.130 "name": "pt1", 00:09:52.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.130 "is_configured": true, 00:09:52.130 "data_offset": 2048, 00:09:52.130 "data_size": 63488 00:09:52.130 }, 00:09:52.130 { 00:09:52.130 "name": "pt2", 00:09:52.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.130 "is_configured": true, 00:09:52.130 "data_offset": 2048, 00:09:52.130 "data_size": 63488 00:09:52.130 } 00:09:52.130 ] 00:09:52.130 }' 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.130 22:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.388 22:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.388 [2024-09-27 22:26:48.231854] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.388 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.647 "name": "raid_bdev1", 00:09:52.647 "aliases": [ 00:09:52.647 "bea06b84-5dbc-4b44-9970-5d5b29637caf" 00:09:52.647 ], 00:09:52.647 "product_name": "Raid Volume", 00:09:52.647 "block_size": 512, 00:09:52.647 "num_blocks": 126976, 00:09:52.647 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:52.647 "assigned_rate_limits": { 00:09:52.647 "rw_ios_per_sec": 0, 00:09:52.647 "rw_mbytes_per_sec": 0, 00:09:52.647 "r_mbytes_per_sec": 0, 00:09:52.647 "w_mbytes_per_sec": 0 00:09:52.647 }, 00:09:52.647 "claimed": false, 00:09:52.647 "zoned": false, 00:09:52.647 "supported_io_types": { 00:09:52.647 "read": true, 00:09:52.647 "write": true, 00:09:52.647 "unmap": true, 00:09:52.647 "flush": true, 00:09:52.647 "reset": true, 00:09:52.647 "nvme_admin": false, 00:09:52.647 "nvme_io": false, 00:09:52.647 "nvme_io_md": false, 00:09:52.647 "write_zeroes": true, 00:09:52.647 "zcopy": false, 00:09:52.647 "get_zone_info": false, 00:09:52.647 "zone_management": false, 00:09:52.647 "zone_append": false, 00:09:52.647 "compare": false, 00:09:52.647 "compare_and_write": false, 00:09:52.647 "abort": false, 00:09:52.647 "seek_hole": false, 00:09:52.647 
"seek_data": false, 00:09:52.647 "copy": false, 00:09:52.647 "nvme_iov_md": false 00:09:52.647 }, 00:09:52.647 "memory_domains": [ 00:09:52.647 { 00:09:52.647 "dma_device_id": "system", 00:09:52.647 "dma_device_type": 1 00:09:52.647 }, 00:09:52.647 { 00:09:52.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.647 "dma_device_type": 2 00:09:52.647 }, 00:09:52.647 { 00:09:52.647 "dma_device_id": "system", 00:09:52.647 "dma_device_type": 1 00:09:52.647 }, 00:09:52.647 { 00:09:52.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.647 "dma_device_type": 2 00:09:52.647 } 00:09:52.647 ], 00:09:52.647 "driver_specific": { 00:09:52.647 "raid": { 00:09:52.647 "uuid": "bea06b84-5dbc-4b44-9970-5d5b29637caf", 00:09:52.647 "strip_size_kb": 64, 00:09:52.647 "state": "online", 00:09:52.647 "raid_level": "concat", 00:09:52.647 "superblock": true, 00:09:52.647 "num_base_bdevs": 2, 00:09:52.647 "num_base_bdevs_discovered": 2, 00:09:52.647 "num_base_bdevs_operational": 2, 00:09:52.647 "base_bdevs_list": [ 00:09:52.647 { 00:09:52.647 "name": "pt1", 00:09:52.647 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.647 "is_configured": true, 00:09:52.647 "data_offset": 2048, 00:09:52.647 "data_size": 63488 00:09:52.647 }, 00:09:52.647 { 00:09:52.647 "name": "pt2", 00:09:52.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.647 "is_configured": true, 00:09:52.647 "data_offset": 2048, 00:09:52.647 "data_size": 63488 00:09:52.647 } 00:09:52.647 ] 00:09:52.647 } 00:09:52.647 } 00:09:52.647 }' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:52.647 pt2' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.647 22:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.647 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:52.648 [2024-09-27 22:26:48.443533] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bea06b84-5dbc-4b44-9970-5d5b29637caf '!=' bea06b84-5dbc-4b44-9970-5d5b29637caf ']' 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62561 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62561 ']' 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62561 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.648 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62561 00:09:52.907 killing process with pid 62561 00:09:52.907 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.907 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.907 22:26:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62561' 00:09:52.907 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62561 00:09:52.907 [2024-09-27 22:26:48.534364] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.907 [2024-09-27 22:26:48.534461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.907 22:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62561 00:09:52.907 [2024-09-27 22:26:48.534512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.907 [2024-09-27 22:26:48.534526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:52.907 [2024-09-27 22:26:48.768505] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.457 22:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:55.457 ************************************ 00:09:55.457 END TEST raid_superblock_test 00:09:55.457 ************************************ 00:09:55.457 00:09:55.457 real 0m5.841s 00:09:55.457 user 0m7.584s 00:09:55.457 sys 0m0.900s 00:09:55.458 22:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.458 22:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.458 22:26:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:55.458 22:26:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:55.458 22:26:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.458 22:26:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.458 ************************************ 00:09:55.458 START TEST raid_read_error_test 00:09:55.458 ************************************ 00:09:55.458 22:26:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:55.458 22:26:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p4ltRdyo3e 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62778 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62778 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62778 ']' 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.458 22:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.458 [2024-09-27 22:26:51.088894] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:09:55.458 [2024-09-27 22:26:51.091107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62778 ] 00:09:55.458 [2024-09-27 22:26:51.277220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.717 [2024-09-27 22:26:51.526252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.976 [2024-09-27 22:26:51.787468] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.976 [2024-09-27 22:26:51.787514] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 BaseBdev1_malloc 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 true 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.542 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 [2024-09-27 22:26:52.358130] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:56.542 [2024-09-27 22:26:52.358321] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.542 [2024-09-27 22:26:52.358351] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:56.542 [2024-09-27 22:26:52.358367] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.542 [2024-09-27 22:26:52.360947] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.542 BaseBdev1 00:09:56.543 [2024-09-27 22:26:52.361141] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.543 BaseBdev2_malloc 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.543 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 true 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 [2024-09-27 22:26:52.434633] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:56.802 [2024-09-27 22:26:52.434832] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.802 [2024-09-27 22:26:52.434889] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:56.802 [2024-09-27 22:26:52.434985] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.802 [2024-09-27 22:26:52.437627] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.802 [2024-09-27 22:26:52.437781] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:56.802 BaseBdev2 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.802 [2024-09-27 22:26:52.446779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:56.802 [2024-09-27 22:26:52.449189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.802 [2024-09-27 22:26:52.449529] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.802 [2024-09-27 22:26:52.449637] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:56.802 [2024-09-27 22:26:52.449958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:56.802 [2024-09-27 22:26:52.450210] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.802 [2024-09-27 22:26:52.450256] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:56.802 [2024-09-27 22:26:52.450559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:56.802 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.803 "name": "raid_bdev1", 00:09:56.803 "uuid": "a5ab4233-aef8-4f12-8233-cd8effa331c9", 00:09:56.803 "strip_size_kb": 64, 00:09:56.803 "state": "online", 00:09:56.803 "raid_level": "concat", 00:09:56.803 "superblock": true, 00:09:56.803 "num_base_bdevs": 2, 00:09:56.803 "num_base_bdevs_discovered": 2, 00:09:56.803 "num_base_bdevs_operational": 2, 00:09:56.803 "base_bdevs_list": [ 00:09:56.803 { 00:09:56.803 "name": "BaseBdev1", 00:09:56.803 "uuid": "b8e21c0c-e9b9-54ba-86d2-dc6b19ef1765", 00:09:56.803 "is_configured": true, 00:09:56.803 "data_offset": 2048, 00:09:56.803 "data_size": 63488 00:09:56.803 }, 00:09:56.803 { 00:09:56.803 "name": "BaseBdev2", 00:09:56.803 "uuid": "bf77b600-21bd-5594-bf86-278cb3edce5f", 00:09:56.803 "is_configured": true, 00:09:56.803 "data_offset": 2048, 00:09:56.803 "data_size": 63488 00:09:56.803 } 00:09:56.803 ] 00:09:56.803 }' 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.803 22:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.062 22:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.062 22:26:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.322 [2024-09-27 22:26:52.967751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.260 "name": "raid_bdev1", 00:09:58.260 "uuid": "a5ab4233-aef8-4f12-8233-cd8effa331c9", 00:09:58.260 "strip_size_kb": 64, 00:09:58.260 "state": "online", 00:09:58.260 "raid_level": "concat", 00:09:58.260 "superblock": true, 00:09:58.260 "num_base_bdevs": 2, 00:09:58.260 "num_base_bdevs_discovered": 2, 00:09:58.260 "num_base_bdevs_operational": 2, 00:09:58.260 "base_bdevs_list": [ 00:09:58.260 { 00:09:58.260 "name": "BaseBdev1", 00:09:58.260 "uuid": "b8e21c0c-e9b9-54ba-86d2-dc6b19ef1765", 00:09:58.260 "is_configured": true, 00:09:58.260 "data_offset": 2048, 00:09:58.260 "data_size": 63488 00:09:58.260 }, 00:09:58.260 { 00:09:58.260 "name": "BaseBdev2", 00:09:58.260 "uuid": "bf77b600-21bd-5594-bf86-278cb3edce5f", 00:09:58.260 "is_configured": true, 00:09:58.260 "data_offset": 2048, 00:09:58.260 "data_size": 63488 00:09:58.260 } 00:09:58.260 ] 00:09:58.260 }' 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.260 22:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.518 22:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.518 22:26:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.518 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.518 [2024-09-27 22:26:54.349292] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.518 [2024-09-27 22:26:54.349330] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.518 [2024-09-27 22:26:54.352130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.518 [2024-09-27 22:26:54.352195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.518 [2024-09-27 22:26:54.352225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.518 [2024-09-27 22:26:54.352239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:58.518 { 00:09:58.518 "results": [ 00:09:58.518 { 00:09:58.518 "job": "raid_bdev1", 00:09:58.518 "core_mask": "0x1", 00:09:58.518 "workload": "randrw", 00:09:58.518 "percentage": 50, 00:09:58.518 "status": "finished", 00:09:58.518 "queue_depth": 1, 00:09:58.518 "io_size": 131072, 00:09:58.518 "runtime": 1.381507, 00:09:58.518 "iops": 15053.850613858634, 00:09:58.518 "mibps": 1881.7313267323293, 00:09:58.518 "io_failed": 1, 00:09:58.518 "io_timeout": 0, 00:09:58.518 "avg_latency_us": 91.25017535282007, 00:09:58.518 "min_latency_us": 28.170281124497993, 00:09:58.518 "max_latency_us": 1559.4409638554216 00:09:58.518 } 00:09:58.519 ], 00:09:58.519 "core_count": 1 00:09:58.519 } 00:09:58.519 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.519 22:26:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62778 00:09:58.519 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62778 ']' 00:09:58.519 22:26:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62778 00:09:58.519 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:58.519 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.519 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62778 00:09:58.777 killing process with pid 62778 00:09:58.777 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.777 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.777 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62778' 00:09:58.777 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62778 00:09:58.777 [2024-09-27 22:26:54.399592] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.777 22:26:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62778 00:09:58.777 [2024-09-27 22:26:54.549217] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p4ltRdyo3e 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:01.312 00:10:01.312 real 0m5.729s 00:10:01.312 user 0m6.509s 00:10:01.312 sys 0m0.699s 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.312 ************************************ 00:10:01.312 END TEST raid_read_error_test 00:10:01.312 ************************************ 00:10:01.312 22:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.312 22:26:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:01.312 22:26:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:01.312 22:26:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.312 22:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.312 ************************************ 00:10:01.312 START TEST raid_write_error_test 00:10:01.312 ************************************ 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.312 22:26:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u1lhoHPOFu 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62939 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62939 00:10:01.312 22:26:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62939 ']' 00:10:01.312 22:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.313 22:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.313 22:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.313 22:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.313 22:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.313 [2024-09-27 22:26:56.889592] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:10:01.313 [2024-09-27 22:26:56.890488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62939 ] 00:10:01.313 [2024-09-27 22:26:57.066246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.572 [2024-09-27 22:26:57.318032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.831 [2024-09-27 22:26:57.576217] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.831 [2024-09-27 22:26:57.576263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 BaseBdev1_malloc 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 true 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 [2024-09-27 22:26:58.159588] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:02.396 [2024-09-27 22:26:58.159785] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.396 [2024-09-27 22:26:58.159817] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:02.396 [2024-09-27 22:26:58.159833] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.396 [2024-09-27 22:26:58.162455] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.396 [2024-09-27 22:26:58.162503] vbdev_passthru.c: 
791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:02.396 BaseBdev1 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 BaseBdev2_malloc 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 true 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 [2024-09-27 22:26:58.237306] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:02.396 [2024-09-27 22:26:58.237489] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.396 [2024-09-27 22:26:58.237519] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:02.396 
[2024-09-27 22:26:58.237537] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.396 [2024-09-27 22:26:58.240178] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.396 [2024-09-27 22:26:58.240333] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:02.396 BaseBdev2 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.396 [2024-09-27 22:26:58.249388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.396 [2024-09-27 22:26:58.251777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.396 [2024-09-27 22:26:58.252146] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.396 [2024-09-27 22:26:58.252259] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:02.396 [2024-09-27 22:26:58.252573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:02.396 [2024-09-27 22:26:58.252750] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.396 [2024-09-27 22:26:58.252763] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:02.396 [2024-09-27 22:26:58.252945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.396 
22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.396 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.654 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.654 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.654 "name": "raid_bdev1", 00:10:02.654 "uuid": "e856eb9d-b63f-40f4-b9a8-0697347ec06f", 00:10:02.654 "strip_size_kb": 64, 00:10:02.654 "state": "online", 00:10:02.654 "raid_level": "concat", 00:10:02.654 "superblock": true, 
00:10:02.654 "num_base_bdevs": 2, 00:10:02.654 "num_base_bdevs_discovered": 2, 00:10:02.654 "num_base_bdevs_operational": 2, 00:10:02.654 "base_bdevs_list": [ 00:10:02.654 { 00:10:02.654 "name": "BaseBdev1", 00:10:02.654 "uuid": "b4f001e1-2cbd-5179-a1ea-6e585ed47ba2", 00:10:02.654 "is_configured": true, 00:10:02.654 "data_offset": 2048, 00:10:02.654 "data_size": 63488 00:10:02.654 }, 00:10:02.654 { 00:10:02.654 "name": "BaseBdev2", 00:10:02.654 "uuid": "ffa565ba-2a8d-58d7-98fc-f9d1db68ee52", 00:10:02.654 "is_configured": true, 00:10:02.654 "data_offset": 2048, 00:10:02.654 "data_size": 63488 00:10:02.654 } 00:10:02.654 ] 00:10:02.654 }' 00:10:02.654 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.654 22:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.911 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:02.911 22:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:03.168 [2024-09-27 22:26:58.822123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.101 "name": "raid_bdev1", 00:10:04.101 "uuid": "e856eb9d-b63f-40f4-b9a8-0697347ec06f", 00:10:04.101 "strip_size_kb": 64, 00:10:04.101 "state": "online", 00:10:04.101 "raid_level": "concat", 
00:10:04.101 "superblock": true, 00:10:04.101 "num_base_bdevs": 2, 00:10:04.101 "num_base_bdevs_discovered": 2, 00:10:04.101 "num_base_bdevs_operational": 2, 00:10:04.101 "base_bdevs_list": [ 00:10:04.101 { 00:10:04.101 "name": "BaseBdev1", 00:10:04.101 "uuid": "b4f001e1-2cbd-5179-a1ea-6e585ed47ba2", 00:10:04.101 "is_configured": true, 00:10:04.101 "data_offset": 2048, 00:10:04.101 "data_size": 63488 00:10:04.101 }, 00:10:04.101 { 00:10:04.101 "name": "BaseBdev2", 00:10:04.101 "uuid": "ffa565ba-2a8d-58d7-98fc-f9d1db68ee52", 00:10:04.101 "is_configured": true, 00:10:04.101 "data_offset": 2048, 00:10:04.101 "data_size": 63488 00:10:04.101 } 00:10:04.101 ] 00:10:04.101 }' 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.101 22:26:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.359 22:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.359 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.359 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.359 [2024-09-27 22:27:00.154652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.359 [2024-09-27 22:27:00.154695] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.359 { 00:10:04.359 "results": [ 00:10:04.360 { 00:10:04.360 "job": "raid_bdev1", 00:10:04.360 "core_mask": "0x1", 00:10:04.360 "workload": "randrw", 00:10:04.360 "percentage": 50, 00:10:04.360 "status": "finished", 00:10:04.360 "queue_depth": 1, 00:10:04.360 "io_size": 131072, 00:10:04.360 "runtime": 1.332452, 00:10:04.360 "iops": 14956.636336618505, 00:10:04.360 "mibps": 1869.5795420773131, 00:10:04.360 "io_failed": 1, 00:10:04.360 "io_timeout": 0, 00:10:04.360 "avg_latency_us": 91.80933782294254, 00:10:04.360 "min_latency_us": 
29.815261044176708, 00:10:04.360 "max_latency_us": 1546.2811244979919 00:10:04.360 } 00:10:04.360 ], 00:10:04.360 "core_count": 1 00:10:04.360 } 00:10:04.360 [2024-09-27 22:27:00.157558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.360 [2024-09-27 22:27:00.157607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.360 [2024-09-27 22:27:00.157640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.360 [2024-09-27 22:27:00.157656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62939 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62939 ']' 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62939 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62939 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62939' 00:10:04.360 killing process with pid 62939 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62939 00:10:04.360 [2024-09-27 22:27:00.206752] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.360 22:27:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62939 00:10:04.618 [2024-09-27 22:27:00.357697] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u1lhoHPOFu 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:07.145 00:10:07.145 real 0m5.814s 00:10:07.145 user 0m6.607s 00:10:07.145 sys 0m0.684s 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.145 ************************************ 00:10:07.145 END TEST raid_write_error_test 00:10:07.145 ************************************ 00:10:07.145 22:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.145 22:27:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:07.145 22:27:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:07.145 22:27:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:07.145 22:27:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.145 22:27:02 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.145 ************************************ 00:10:07.145 START TEST raid_state_function_test 00:10:07.145 ************************************ 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63101 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63101' 00:10:07.145 Process raid pid: 63101 00:10:07.145 22:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63101 00:10:07.146 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63101 ']' 00:10:07.146 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.146 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.146 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:07.146 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.146 22:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.146 [2024-09-27 22:27:02.767700] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:10:07.146 [2024-09-27 22:27:02.767853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.146 [2024-09-27 22:27:02.929579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.415 [2024-09-27 22:27:03.188862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.688 [2024-09-27 22:27:03.450561] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.688 [2024-09-27 22:27:03.450599] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 [2024-09-27 22:27:04.019219] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.254 [2024-09-27 22:27:04.019284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.254 [2024-09-27 22:27:04.019296] bdev.c:8309:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:10:08.254 [2024-09-27 22:27:04.019309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.254 "name": "Existed_Raid", 00:10:08.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.254 "strip_size_kb": 0, 00:10:08.254 "state": "configuring", 00:10:08.254 "raid_level": "raid1", 00:10:08.254 "superblock": false, 00:10:08.254 "num_base_bdevs": 2, 00:10:08.254 "num_base_bdevs_discovered": 0, 00:10:08.254 "num_base_bdevs_operational": 2, 00:10:08.254 "base_bdevs_list": [ 00:10:08.254 { 00:10:08.254 "name": "BaseBdev1", 00:10:08.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.254 "is_configured": false, 00:10:08.254 "data_offset": 0, 00:10:08.254 "data_size": 0 00:10:08.254 }, 00:10:08.254 { 00:10:08.254 "name": "BaseBdev2", 00:10:08.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.254 "is_configured": false, 00:10:08.254 "data_offset": 0, 00:10:08.254 "data_size": 0 00:10:08.254 } 00:10:08.254 ] 00:10:08.254 }' 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.254 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [2024-09-27 22:27:04.471174] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.821 [2024-09-27 22:27:04.471217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [2024-09-27 22:27:04.479177] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.821 [2024-09-27 22:27:04.479231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.821 [2024-09-27 22:27:04.479242] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.821 [2024-09-27 22:27:04.479259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [2024-09-27 22:27:04.532852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.821 BaseBdev1 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.821 
22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 [ 00:10:08.821 { 00:10:08.821 "name": "BaseBdev1", 00:10:08.821 "aliases": [ 00:10:08.821 "4236cb11-7037-4cc0-aa68-05da2a89f80b" 00:10:08.821 ], 00:10:08.821 "product_name": "Malloc disk", 00:10:08.821 "block_size": 512, 00:10:08.821 "num_blocks": 65536, 00:10:08.821 "uuid": "4236cb11-7037-4cc0-aa68-05da2a89f80b", 00:10:08.821 "assigned_rate_limits": { 00:10:08.821 "rw_ios_per_sec": 0, 00:10:08.821 "rw_mbytes_per_sec": 0, 00:10:08.821 "r_mbytes_per_sec": 0, 00:10:08.821 "w_mbytes_per_sec": 0 00:10:08.821 }, 00:10:08.821 "claimed": true, 00:10:08.821 "claim_type": "exclusive_write", 00:10:08.821 "zoned": false, 00:10:08.821 "supported_io_types": { 00:10:08.821 "read": true, 00:10:08.821 "write": true, 00:10:08.821 "unmap": true, 00:10:08.821 "flush": true, 00:10:08.821 "reset": true, 00:10:08.821 "nvme_admin": false, 00:10:08.821 "nvme_io": false, 00:10:08.821 "nvme_io_md": false, 00:10:08.821 "write_zeroes": true, 00:10:08.821 "zcopy": true, 00:10:08.821 "get_zone_info": 
false, 00:10:08.821 "zone_management": false, 00:10:08.821 "zone_append": false, 00:10:08.821 "compare": false, 00:10:08.821 "compare_and_write": false, 00:10:08.821 "abort": true, 00:10:08.821 "seek_hole": false, 00:10:08.821 "seek_data": false, 00:10:08.821 "copy": true, 00:10:08.821 "nvme_iov_md": false 00:10:08.821 }, 00:10:08.821 "memory_domains": [ 00:10:08.821 { 00:10:08.821 "dma_device_id": "system", 00:10:08.821 "dma_device_type": 1 00:10:08.821 }, 00:10:08.821 { 00:10:08.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.821 "dma_device_type": 2 00:10:08.821 } 00:10:08.821 ], 00:10:08.821 "driver_specific": {} 00:10:08.821 } 00:10:08.821 ] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.821 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.821 "name": "Existed_Raid", 00:10:08.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.821 "strip_size_kb": 0, 00:10:08.821 "state": "configuring", 00:10:08.821 "raid_level": "raid1", 00:10:08.821 "superblock": false, 00:10:08.821 "num_base_bdevs": 2, 00:10:08.821 "num_base_bdevs_discovered": 1, 00:10:08.821 "num_base_bdevs_operational": 2, 00:10:08.821 "base_bdevs_list": [ 00:10:08.821 { 00:10:08.821 "name": "BaseBdev1", 00:10:08.821 "uuid": "4236cb11-7037-4cc0-aa68-05da2a89f80b", 00:10:08.821 "is_configured": true, 00:10:08.821 "data_offset": 0, 00:10:08.822 "data_size": 65536 00:10:08.822 }, 00:10:08.822 { 00:10:08.822 "name": "BaseBdev2", 00:10:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.822 "is_configured": false, 00:10:08.822 "data_offset": 0, 00:10:08.822 "data_size": 0 00:10:08.822 } 00:10:08.822 ] 00:10:08.822 }' 00:10:08.822 22:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.822 22:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.387 [2024-09-27 22:27:05.036206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.387 [2024-09-27 22:27:05.036421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.387 [2024-09-27 22:27:05.044239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.387 [2024-09-27 22:27:05.046463] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.387 [2024-09-27 22:27:05.046520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.387 "name": "Existed_Raid", 00:10:09.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.387 "strip_size_kb": 0, 00:10:09.387 "state": "configuring", 00:10:09.387 "raid_level": "raid1", 00:10:09.387 "superblock": false, 00:10:09.387 "num_base_bdevs": 2, 00:10:09.387 "num_base_bdevs_discovered": 1, 00:10:09.387 "num_base_bdevs_operational": 2, 00:10:09.387 "base_bdevs_list": [ 00:10:09.387 { 00:10:09.387 "name": "BaseBdev1", 00:10:09.387 "uuid": "4236cb11-7037-4cc0-aa68-05da2a89f80b", 00:10:09.387 
"is_configured": true, 00:10:09.387 "data_offset": 0, 00:10:09.387 "data_size": 65536 00:10:09.387 }, 00:10:09.387 { 00:10:09.387 "name": "BaseBdev2", 00:10:09.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.387 "is_configured": false, 00:10:09.387 "data_offset": 0, 00:10:09.387 "data_size": 0 00:10:09.387 } 00:10:09.387 ] 00:10:09.387 }' 00:10:09.387 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.388 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.645 [2024-09-27 22:27:05.506933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.645 [2024-09-27 22:27:05.507008] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.645 [2024-09-27 22:27:05.507038] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:09.645 [2024-09-27 22:27:05.507361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:09.645 [2024-09-27 22:27:05.507552] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:09.645 [2024-09-27 22:27:05.507579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:09.645 [2024-09-27 22:27:05.507891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.645 BaseBdev2 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.645 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.905 [ 00:10:09.905 { 00:10:09.905 "name": "BaseBdev2", 00:10:09.905 "aliases": [ 00:10:09.905 "883db6dc-8caf-4867-8ab3-09ff57792440" 00:10:09.905 ], 00:10:09.905 "product_name": "Malloc disk", 00:10:09.905 "block_size": 512, 00:10:09.905 "num_blocks": 65536, 00:10:09.905 "uuid": "883db6dc-8caf-4867-8ab3-09ff57792440", 00:10:09.905 "assigned_rate_limits": { 00:10:09.905 "rw_ios_per_sec": 0, 00:10:09.905 "rw_mbytes_per_sec": 0, 00:10:09.905 "r_mbytes_per_sec": 0, 00:10:09.905 "w_mbytes_per_sec": 0 00:10:09.905 }, 00:10:09.905 "claimed": true, 00:10:09.905 "claim_type": 
"exclusive_write", 00:10:09.905 "zoned": false, 00:10:09.905 "supported_io_types": { 00:10:09.905 "read": true, 00:10:09.905 "write": true, 00:10:09.905 "unmap": true, 00:10:09.905 "flush": true, 00:10:09.905 "reset": true, 00:10:09.905 "nvme_admin": false, 00:10:09.905 "nvme_io": false, 00:10:09.905 "nvme_io_md": false, 00:10:09.905 "write_zeroes": true, 00:10:09.905 "zcopy": true, 00:10:09.905 "get_zone_info": false, 00:10:09.905 "zone_management": false, 00:10:09.905 "zone_append": false, 00:10:09.905 "compare": false, 00:10:09.905 "compare_and_write": false, 00:10:09.905 "abort": true, 00:10:09.905 "seek_hole": false, 00:10:09.905 "seek_data": false, 00:10:09.905 "copy": true, 00:10:09.905 "nvme_iov_md": false 00:10:09.905 }, 00:10:09.905 "memory_domains": [ 00:10:09.905 { 00:10:09.905 "dma_device_id": "system", 00:10:09.905 "dma_device_type": 1 00:10:09.905 }, 00:10:09.905 { 00:10:09.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.905 "dma_device_type": 2 00:10:09.905 } 00:10:09.905 ], 00:10:09.905 "driver_specific": {} 00:10:09.905 } 00:10:09.905 ] 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.905 
22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.905 "name": "Existed_Raid", 00:10:09.905 "uuid": "09e1e15c-93d1-4306-8f10-a0861de53509", 00:10:09.905 "strip_size_kb": 0, 00:10:09.905 "state": "online", 00:10:09.905 "raid_level": "raid1", 00:10:09.905 "superblock": false, 00:10:09.905 "num_base_bdevs": 2, 00:10:09.905 "num_base_bdevs_discovered": 2, 00:10:09.905 "num_base_bdevs_operational": 2, 00:10:09.905 "base_bdevs_list": [ 00:10:09.905 { 00:10:09.905 "name": "BaseBdev1", 00:10:09.905 "uuid": "4236cb11-7037-4cc0-aa68-05da2a89f80b", 00:10:09.905 "is_configured": true, 00:10:09.905 "data_offset": 0, 00:10:09.905 "data_size": 65536 00:10:09.905 }, 00:10:09.905 { 00:10:09.905 "name": "BaseBdev2", 
00:10:09.905 "uuid": "883db6dc-8caf-4867-8ab3-09ff57792440", 00:10:09.905 "is_configured": true, 00:10:09.905 "data_offset": 0, 00:10:09.905 "data_size": 65536 00:10:09.905 } 00:10:09.905 ] 00:10:09.905 }' 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.905 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.164 22:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.164 [2024-09-27 22:27:06.002612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.164 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.164 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.164 "name": "Existed_Raid", 00:10:10.164 "aliases": [ 00:10:10.164 "09e1e15c-93d1-4306-8f10-a0861de53509" 00:10:10.164 ], 
00:10:10.164 "product_name": "Raid Volume", 00:10:10.164 "block_size": 512, 00:10:10.164 "num_blocks": 65536, 00:10:10.164 "uuid": "09e1e15c-93d1-4306-8f10-a0861de53509", 00:10:10.164 "assigned_rate_limits": { 00:10:10.164 "rw_ios_per_sec": 0, 00:10:10.164 "rw_mbytes_per_sec": 0, 00:10:10.164 "r_mbytes_per_sec": 0, 00:10:10.164 "w_mbytes_per_sec": 0 00:10:10.164 }, 00:10:10.164 "claimed": false, 00:10:10.164 "zoned": false, 00:10:10.164 "supported_io_types": { 00:10:10.164 "read": true, 00:10:10.164 "write": true, 00:10:10.164 "unmap": false, 00:10:10.164 "flush": false, 00:10:10.164 "reset": true, 00:10:10.164 "nvme_admin": false, 00:10:10.164 "nvme_io": false, 00:10:10.164 "nvme_io_md": false, 00:10:10.164 "write_zeroes": true, 00:10:10.164 "zcopy": false, 00:10:10.164 "get_zone_info": false, 00:10:10.164 "zone_management": false, 00:10:10.164 "zone_append": false, 00:10:10.164 "compare": false, 00:10:10.164 "compare_and_write": false, 00:10:10.164 "abort": false, 00:10:10.164 "seek_hole": false, 00:10:10.164 "seek_data": false, 00:10:10.164 "copy": false, 00:10:10.164 "nvme_iov_md": false 00:10:10.164 }, 00:10:10.164 "memory_domains": [ 00:10:10.164 { 00:10:10.164 "dma_device_id": "system", 00:10:10.164 "dma_device_type": 1 00:10:10.164 }, 00:10:10.164 { 00:10:10.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.164 "dma_device_type": 2 00:10:10.164 }, 00:10:10.164 { 00:10:10.164 "dma_device_id": "system", 00:10:10.164 "dma_device_type": 1 00:10:10.164 }, 00:10:10.164 { 00:10:10.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.164 "dma_device_type": 2 00:10:10.164 } 00:10:10.164 ], 00:10:10.164 "driver_specific": { 00:10:10.164 "raid": { 00:10:10.164 "uuid": "09e1e15c-93d1-4306-8f10-a0861de53509", 00:10:10.164 "strip_size_kb": 0, 00:10:10.164 "state": "online", 00:10:10.164 "raid_level": "raid1", 00:10:10.164 "superblock": false, 00:10:10.164 "num_base_bdevs": 2, 00:10:10.164 "num_base_bdevs_discovered": 2, 00:10:10.164 "num_base_bdevs_operational": 
2, 00:10:10.164 "base_bdevs_list": [ 00:10:10.164 { 00:10:10.164 "name": "BaseBdev1", 00:10:10.164 "uuid": "4236cb11-7037-4cc0-aa68-05da2a89f80b", 00:10:10.164 "is_configured": true, 00:10:10.164 "data_offset": 0, 00:10:10.164 "data_size": 65536 00:10:10.164 }, 00:10:10.164 { 00:10:10.164 "name": "BaseBdev2", 00:10:10.165 "uuid": "883db6dc-8caf-4867-8ab3-09ff57792440", 00:10:10.165 "is_configured": true, 00:10:10.165 "data_offset": 0, 00:10:10.165 "data_size": 65536 00:10:10.165 } 00:10:10.165 ] 00:10:10.165 } 00:10:10.165 } 00:10:10.165 }' 00:10:10.165 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:10.422 BaseBdev2' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.422 22:27:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.422 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.422 [2024-09-27 22:27:06.222072] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.680 "name": "Existed_Raid", 00:10:10.680 "uuid": 
"09e1e15c-93d1-4306-8f10-a0861de53509", 00:10:10.680 "strip_size_kb": 0, 00:10:10.680 "state": "online", 00:10:10.680 "raid_level": "raid1", 00:10:10.680 "superblock": false, 00:10:10.680 "num_base_bdevs": 2, 00:10:10.680 "num_base_bdevs_discovered": 1, 00:10:10.680 "num_base_bdevs_operational": 1, 00:10:10.680 "base_bdevs_list": [ 00:10:10.680 { 00:10:10.680 "name": null, 00:10:10.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.680 "is_configured": false, 00:10:10.680 "data_offset": 0, 00:10:10.680 "data_size": 65536 00:10:10.680 }, 00:10:10.680 { 00:10:10.680 "name": "BaseBdev2", 00:10:10.680 "uuid": "883db6dc-8caf-4867-8ab3-09ff57792440", 00:10:10.680 "is_configured": true, 00:10:10.680 "data_offset": 0, 00:10:10.680 "data_size": 65536 00:10:10.680 } 00:10:10.680 ] 00:10:10.680 }' 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.680 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.939 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.939 [2024-09-27 22:27:06.808297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.939 [2024-09-27 22:27:06.808554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.198 [2024-09-27 22:27:06.915609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.198 [2024-09-27 22:27:06.915668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.198 [2024-09-27 22:27:06.915684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.198 
22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63101 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63101 ']' 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63101 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.198 22:27:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63101 00:10:11.198 killing process with pid 63101 00:10:11.198 22:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.198 22:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.198 22:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63101' 00:10:11.198 22:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63101 00:10:11.198 [2024-09-27 22:27:07.017735] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.198 22:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63101 00:10:11.198 [2024-09-27 22:27:07.036944] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:13.739 00:10:13.739 real 0m6.561s 00:10:13.739 user 0m8.696s 00:10:13.739 sys 0m1.054s 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:10:13.739 22:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.739 ************************************ 00:10:13.739 END TEST raid_state_function_test 00:10:13.739 ************************************ 00:10:13.739 22:27:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:13.739 22:27:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.739 22:27:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.739 22:27:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.739 ************************************ 00:10:13.739 START TEST raid_state_function_test_sb 00:10:13.739 ************************************ 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63365 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63365' 00:10:13.739 Process raid pid: 63365 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63365 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 63365 ']' 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.739 22:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.739 [2024-09-27 22:27:09.404111] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:10:13.739 [2024-09-27 22:27:09.404287] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.739 [2024-09-27 22:27:09.569965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.999 [2024-09-27 22:27:09.823565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.258 [2024-09-27 22:27:10.087337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.258 [2024-09-27 22:27:10.087379] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.828 [2024-09-27 22:27:10.604675] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.828 [2024-09-27 22:27:10.604743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.828 [2024-09-27 22:27:10.604756] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.828 [2024-09-27 22:27:10.604771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.828 "name": "Existed_Raid", 00:10:14.828 "uuid": "67dadb69-bdc2-444b-9bee-147962c079bb", 00:10:14.828 "strip_size_kb": 0, 00:10:14.828 "state": "configuring", 00:10:14.828 "raid_level": "raid1", 00:10:14.828 "superblock": true, 00:10:14.828 "num_base_bdevs": 2, 00:10:14.828 "num_base_bdevs_discovered": 0, 00:10:14.828 "num_base_bdevs_operational": 2, 00:10:14.828 "base_bdevs_list": [ 00:10:14.828 { 00:10:14.828 "name": "BaseBdev1", 00:10:14.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.828 "is_configured": false, 00:10:14.828 "data_offset": 0, 00:10:14.828 "data_size": 0 00:10:14.828 }, 00:10:14.828 { 00:10:14.828 "name": "BaseBdev2", 00:10:14.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.828 "is_configured": false, 00:10:14.828 "data_offset": 0, 00:10:14.828 "data_size": 0 00:10:14.828 } 00:10:14.828 ] 00:10:14.828 }' 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.828 22:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.396 [2024-09-27 22:27:11.063998] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.396 [2024-09-27 22:27:11.064195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.396 [2024-09-27 22:27:11.075994] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.396 [2024-09-27 22:27:11.076179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.396 [2024-09-27 22:27:11.076274] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.396 [2024-09-27 22:27:11.076326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:15.396 [2024-09-27 22:27:11.134558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.396 BaseBdev1 00:10:15.396 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.397 [ 00:10:15.397 { 00:10:15.397 "name": "BaseBdev1", 00:10:15.397 "aliases": [ 00:10:15.397 "a42d0d03-a9da-489d-9444-02cc3027d010" 00:10:15.397 ], 00:10:15.397 "product_name": "Malloc disk", 00:10:15.397 "block_size": 512, 
00:10:15.397 "num_blocks": 65536, 00:10:15.397 "uuid": "a42d0d03-a9da-489d-9444-02cc3027d010", 00:10:15.397 "assigned_rate_limits": { 00:10:15.397 "rw_ios_per_sec": 0, 00:10:15.397 "rw_mbytes_per_sec": 0, 00:10:15.397 "r_mbytes_per_sec": 0, 00:10:15.397 "w_mbytes_per_sec": 0 00:10:15.397 }, 00:10:15.397 "claimed": true, 00:10:15.397 "claim_type": "exclusive_write", 00:10:15.397 "zoned": false, 00:10:15.397 "supported_io_types": { 00:10:15.397 "read": true, 00:10:15.397 "write": true, 00:10:15.397 "unmap": true, 00:10:15.397 "flush": true, 00:10:15.397 "reset": true, 00:10:15.397 "nvme_admin": false, 00:10:15.397 "nvme_io": false, 00:10:15.397 "nvme_io_md": false, 00:10:15.397 "write_zeroes": true, 00:10:15.397 "zcopy": true, 00:10:15.397 "get_zone_info": false, 00:10:15.397 "zone_management": false, 00:10:15.397 "zone_append": false, 00:10:15.397 "compare": false, 00:10:15.397 "compare_and_write": false, 00:10:15.397 "abort": true, 00:10:15.397 "seek_hole": false, 00:10:15.397 "seek_data": false, 00:10:15.397 "copy": true, 00:10:15.397 "nvme_iov_md": false 00:10:15.397 }, 00:10:15.397 "memory_domains": [ 00:10:15.397 { 00:10:15.397 "dma_device_id": "system", 00:10:15.397 "dma_device_type": 1 00:10:15.397 }, 00:10:15.397 { 00:10:15.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.397 "dma_device_type": 2 00:10:15.397 } 00:10:15.397 ], 00:10:15.397 "driver_specific": {} 00:10:15.397 } 00:10:15.397 ] 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.397 "name": "Existed_Raid", 00:10:15.397 "uuid": "39a81c5c-bc99-4854-ae2c-b1770356c16c", 00:10:15.397 "strip_size_kb": 0, 00:10:15.397 "state": "configuring", 00:10:15.397 "raid_level": "raid1", 00:10:15.397 "superblock": true, 00:10:15.397 "num_base_bdevs": 2, 00:10:15.397 "num_base_bdevs_discovered": 1, 00:10:15.397 "num_base_bdevs_operational": 2, 00:10:15.397 "base_bdevs_list": [ 00:10:15.397 { 00:10:15.397 "name": "BaseBdev1", 
00:10:15.397 "uuid": "a42d0d03-a9da-489d-9444-02cc3027d010", 00:10:15.397 "is_configured": true, 00:10:15.397 "data_offset": 2048, 00:10:15.397 "data_size": 63488 00:10:15.397 }, 00:10:15.397 { 00:10:15.397 "name": "BaseBdev2", 00:10:15.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.397 "is_configured": false, 00:10:15.397 "data_offset": 0, 00:10:15.397 "data_size": 0 00:10:15.397 } 00:10:15.397 ] 00:10:15.397 }' 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.397 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.974 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.974 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.974 [2024-09-27 22:27:11.654155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.974 [2024-09-27 22:27:11.654221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.975 [2024-09-27 22:27:11.666206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.975 [2024-09-27 22:27:11.668533] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:10:15.975 [2024-09-27 22:27:11.668597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.975 "name": "Existed_Raid", 00:10:15.975 "uuid": "2a669d53-8870-4803-adf9-af1fdb34083a", 00:10:15.975 "strip_size_kb": 0, 00:10:15.975 "state": "configuring", 00:10:15.975 "raid_level": "raid1", 00:10:15.975 "superblock": true, 00:10:15.975 "num_base_bdevs": 2, 00:10:15.975 "num_base_bdevs_discovered": 1, 00:10:15.975 "num_base_bdevs_operational": 2, 00:10:15.975 "base_bdevs_list": [ 00:10:15.975 { 00:10:15.975 "name": "BaseBdev1", 00:10:15.975 "uuid": "a42d0d03-a9da-489d-9444-02cc3027d010", 00:10:15.975 "is_configured": true, 00:10:15.975 "data_offset": 2048, 00:10:15.975 "data_size": 63488 00:10:15.975 }, 00:10:15.975 { 00:10:15.975 "name": "BaseBdev2", 00:10:15.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.975 "is_configured": false, 00:10:15.975 "data_offset": 0, 00:10:15.975 "data_size": 0 00:10:15.975 } 00:10:15.975 ] 00:10:15.975 }' 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.975 22:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 [2024-09-27 22:27:12.190374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.558 [2024-09-27 22:27:12.190914] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.558 [2024-09-27 22:27:12.190941] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.558 BaseBdev2 00:10:16.558 [2024-09-27 22:27:12.191291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.558 [2024-09-27 22:27:12.191474] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.558 [2024-09-27 22:27:12.191490] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:16.558 [2024-09-27 22:27:12.191639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 [ 00:10:16.558 { 00:10:16.558 "name": "BaseBdev2", 00:10:16.558 "aliases": [ 00:10:16.558 "dce12919-4c65-497e-943b-f0a51ff2493d" 00:10:16.558 ], 00:10:16.558 "product_name": "Malloc disk", 00:10:16.558 "block_size": 512, 00:10:16.558 "num_blocks": 65536, 00:10:16.558 "uuid": "dce12919-4c65-497e-943b-f0a51ff2493d", 00:10:16.558 "assigned_rate_limits": { 00:10:16.558 "rw_ios_per_sec": 0, 00:10:16.558 "rw_mbytes_per_sec": 0, 00:10:16.558 "r_mbytes_per_sec": 0, 00:10:16.558 "w_mbytes_per_sec": 0 00:10:16.558 }, 00:10:16.558 "claimed": true, 00:10:16.558 "claim_type": "exclusive_write", 00:10:16.558 "zoned": false, 00:10:16.558 "supported_io_types": { 00:10:16.558 "read": true, 00:10:16.558 "write": true, 00:10:16.558 "unmap": true, 00:10:16.558 "flush": true, 00:10:16.558 "reset": true, 00:10:16.558 "nvme_admin": false, 00:10:16.558 "nvme_io": false, 00:10:16.558 "nvme_io_md": false, 00:10:16.558 "write_zeroes": true, 00:10:16.558 "zcopy": true, 00:10:16.558 "get_zone_info": false, 00:10:16.558 "zone_management": false, 00:10:16.558 "zone_append": false, 00:10:16.558 "compare": false, 00:10:16.558 "compare_and_write": false, 00:10:16.558 "abort": true, 00:10:16.558 "seek_hole": false, 00:10:16.558 "seek_data": false, 00:10:16.558 "copy": true, 00:10:16.558 "nvme_iov_md": false 00:10:16.558 }, 00:10:16.558 "memory_domains": [ 00:10:16.558 { 00:10:16.558 "dma_device_id": "system", 00:10:16.558 "dma_device_type": 1 00:10:16.558 }, 00:10:16.558 { 00:10:16.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.558 "dma_device_type": 2 00:10:16.558 } 00:10:16.558 ], 00:10:16.558 "driver_specific": 
{} 00:10:16.558 } 00:10:16.558 ] 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.558 "name": "Existed_Raid", 00:10:16.558 "uuid": "2a669d53-8870-4803-adf9-af1fdb34083a", 00:10:16.558 "strip_size_kb": 0, 00:10:16.558 "state": "online", 00:10:16.558 "raid_level": "raid1", 00:10:16.558 "superblock": true, 00:10:16.558 "num_base_bdevs": 2, 00:10:16.558 "num_base_bdevs_discovered": 2, 00:10:16.558 "num_base_bdevs_operational": 2, 00:10:16.558 "base_bdevs_list": [ 00:10:16.558 { 00:10:16.558 "name": "BaseBdev1", 00:10:16.558 "uuid": "a42d0d03-a9da-489d-9444-02cc3027d010", 00:10:16.558 "is_configured": true, 00:10:16.558 "data_offset": 2048, 00:10:16.558 "data_size": 63488 00:10:16.558 }, 00:10:16.558 { 00:10:16.558 "name": "BaseBdev2", 00:10:16.558 "uuid": "dce12919-4c65-497e-943b-f0a51ff2493d", 00:10:16.558 "is_configured": true, 00:10:16.558 "data_offset": 2048, 00:10:16.558 "data_size": 63488 00:10:16.558 } 00:10:16.558 ] 00:10:16.558 }' 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.558 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.821 [2024-09-27 22:27:12.662185] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.821 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.085 "name": "Existed_Raid", 00:10:17.085 "aliases": [ 00:10:17.085 "2a669d53-8870-4803-adf9-af1fdb34083a" 00:10:17.085 ], 00:10:17.085 "product_name": "Raid Volume", 00:10:17.085 "block_size": 512, 00:10:17.085 "num_blocks": 63488, 00:10:17.085 "uuid": "2a669d53-8870-4803-adf9-af1fdb34083a", 00:10:17.085 "assigned_rate_limits": { 00:10:17.085 "rw_ios_per_sec": 0, 00:10:17.085 "rw_mbytes_per_sec": 0, 00:10:17.085 "r_mbytes_per_sec": 0, 00:10:17.085 "w_mbytes_per_sec": 0 00:10:17.085 }, 00:10:17.085 "claimed": false, 00:10:17.085 "zoned": false, 00:10:17.085 "supported_io_types": { 00:10:17.085 "read": true, 00:10:17.085 "write": true, 00:10:17.085 "unmap": false, 00:10:17.085 "flush": false, 00:10:17.085 "reset": true, 00:10:17.085 "nvme_admin": false, 00:10:17.085 "nvme_io": false, 00:10:17.085 "nvme_io_md": false, 00:10:17.085 "write_zeroes": true, 00:10:17.085 "zcopy": false, 00:10:17.085 "get_zone_info": false, 00:10:17.085 "zone_management": false, 00:10:17.085 "zone_append": false, 00:10:17.085 "compare": false, 00:10:17.085 "compare_and_write": false, 
00:10:17.085 "abort": false, 00:10:17.085 "seek_hole": false, 00:10:17.085 "seek_data": false, 00:10:17.085 "copy": false, 00:10:17.085 "nvme_iov_md": false 00:10:17.085 }, 00:10:17.085 "memory_domains": [ 00:10:17.085 { 00:10:17.085 "dma_device_id": "system", 00:10:17.085 "dma_device_type": 1 00:10:17.085 }, 00:10:17.085 { 00:10:17.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.085 "dma_device_type": 2 00:10:17.085 }, 00:10:17.085 { 00:10:17.085 "dma_device_id": "system", 00:10:17.085 "dma_device_type": 1 00:10:17.085 }, 00:10:17.085 { 00:10:17.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.085 "dma_device_type": 2 00:10:17.085 } 00:10:17.085 ], 00:10:17.085 "driver_specific": { 00:10:17.085 "raid": { 00:10:17.085 "uuid": "2a669d53-8870-4803-adf9-af1fdb34083a", 00:10:17.085 "strip_size_kb": 0, 00:10:17.085 "state": "online", 00:10:17.085 "raid_level": "raid1", 00:10:17.085 "superblock": true, 00:10:17.085 "num_base_bdevs": 2, 00:10:17.085 "num_base_bdevs_discovered": 2, 00:10:17.085 "num_base_bdevs_operational": 2, 00:10:17.085 "base_bdevs_list": [ 00:10:17.085 { 00:10:17.085 "name": "BaseBdev1", 00:10:17.085 "uuid": "a42d0d03-a9da-489d-9444-02cc3027d010", 00:10:17.085 "is_configured": true, 00:10:17.085 "data_offset": 2048, 00:10:17.085 "data_size": 63488 00:10:17.085 }, 00:10:17.085 { 00:10:17.085 "name": "BaseBdev2", 00:10:17.085 "uuid": "dce12919-4c65-497e-943b-f0a51ff2493d", 00:10:17.085 "is_configured": true, 00:10:17.085 "data_offset": 2048, 00:10:17.085 "data_size": 63488 00:10:17.085 } 00:10:17.085 ] 00:10:17.085 } 00:10:17.085 } 00:10:17.085 }' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.085 BaseBdev2' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.085 22:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.085 [2024-09-27 22:27:12.901613] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:17.351 22:27:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.351 "name": "Existed_Raid", 00:10:17.351 "uuid": "2a669d53-8870-4803-adf9-af1fdb34083a", 00:10:17.351 "strip_size_kb": 0, 00:10:17.351 "state": "online", 00:10:17.351 "raid_level": "raid1", 00:10:17.351 "superblock": true, 00:10:17.351 "num_base_bdevs": 2, 00:10:17.351 "num_base_bdevs_discovered": 1, 00:10:17.351 "num_base_bdevs_operational": 1, 00:10:17.351 "base_bdevs_list": [ 00:10:17.351 { 00:10:17.351 "name": null, 00:10:17.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.351 "is_configured": false, 00:10:17.351 "data_offset": 0, 00:10:17.351 "data_size": 63488 00:10:17.351 }, 00:10:17.351 { 00:10:17.351 "name": "BaseBdev2", 00:10:17.351 "uuid": "dce12919-4c65-497e-943b-f0a51ff2493d", 00:10:17.351 "is_configured": true, 00:10:17.351 "data_offset": 2048, 00:10:17.351 "data_size": 63488 00:10:17.351 } 00:10:17.351 ] 00:10:17.351 }' 00:10:17.351 
22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.351 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.612 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.612 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.612 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.612 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.612 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.612 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.871 [2024-09-27 22:27:13.514193] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.871 [2024-09-27 22:27:13.514321] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.871 [2024-09-27 22:27:13.623790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.871 [2024-09-27 22:27:13.623854] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.871 [2024-09-27 22:27:13.623871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63365 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63365 ']' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63365 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63365 00:10:17.871 killing process with pid 63365 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63365' 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63365 00:10:17.871 [2024-09-27 22:27:13.723908] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.871 22:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63365 00:10:17.871 [2024-09-27 22:27:13.744017] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.481 ************************************ 00:10:20.481 END TEST raid_state_function_test_sb 00:10:20.482 ************************************ 00:10:20.482 22:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:20.482 00:10:20.482 real 0m6.598s 00:10:20.482 user 0m8.845s 00:10:20.482 sys 0m0.999s 00:10:20.482 22:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.482 22:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.482 22:27:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:20.482 22:27:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:20.482 22:27:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.482 22:27:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.482 
************************************
00:10:20.482 START TEST raid_superblock_test
00:10:20.482 ************************************
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63634
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63634
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63634 ']'
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:20.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:20.482 22:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.482 [2024-09-27 22:27:16.096062] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization...
00:10:20.482 [2024-09-27 22:27:16.096480] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63634 ]
00:10:20.482 [2024-09-27 22:27:16.291075] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.741 [2024-09-27 22:27:16.545587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.036 [2024-09-27 22:27:16.806287] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:21.036 [2024-09-27 22:27:16.806337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:21.604 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:21.604 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:10:21.604 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:21.604 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:21.604 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.605 malloc1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.605 [2024-09-27 22:27:17.381735] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:21.605 [2024-09-27 22:27:17.382038] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:21.605 [2024-09-27 22:27:17.382116] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:21.605 [2024-09-27 22:27:17.382232] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:21.605 [2024-09-27 22:27:17.385148] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:21.605 [2024-09-27 22:27:17.385341] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:21.605 pt1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.605 malloc2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.605 [2024-09-27 22:27:17.451321] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:21.605 [2024-09-27 22:27:17.451415] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:21.605 [2024-09-27 22:27:17.451454] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:10:21.605 [2024-09-27 22:27:17.451471] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:21.605 [2024-09-27 22:27:17.454305] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:21.605 [2024-09-27 22:27:17.454367] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:21.605 pt2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.605 [2024-09-27 22:27:17.463466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:21.605 [2024-09-27 22:27:17.466082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:21.605 [2024-09-27 22:27:17.466308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:21.605 [2024-09-27 22:27:17.466325] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:21.605 [2024-09-27 22:27:17.466660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:21.605 [2024-09-27 22:27:17.466856] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:21.605 [2024-09-27 22:27:17.466872] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:10:21.605 [2024-09-27 22:27:17.467126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.605 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:21.864 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.864 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.864 "name": "raid_bdev1",
00:10:21.864 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0",
00:10:21.864 "strip_size_kb": 0,
00:10:21.864 "state": "online",
00:10:21.864 "raid_level": "raid1",
00:10:21.864 "superblock": true,
00:10:21.864 "num_base_bdevs": 2,
00:10:21.864 "num_base_bdevs_discovered": 2,
00:10:21.864 "num_base_bdevs_operational": 2,
00:10:21.864 "base_bdevs_list": [
00:10:21.864 {
00:10:21.864 "name": "pt1",
00:10:21.864 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:21.864 "is_configured": true,
00:10:21.864 "data_offset": 2048,
00:10:21.864 "data_size": 63488
00:10:21.864 },
00:10:21.864 {
00:10:21.864 "name": "pt2",
00:10:21.864 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:21.864 "is_configured": true,
00:10:21.864 "data_offset": 2048,
00:10:21.864 "data_size": 63488
00:10:21.864 }
00:10:21.864 ]
00:10:21.864 }'
00:10:21.864 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.864 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.124 [2024-09-27 22:27:17.927523] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.124 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:22.124 "name": "raid_bdev1",
00:10:22.124 "aliases": [
00:10:22.124 "97835087-6496-4f5b-8537-b550ddd470a0"
00:10:22.124 ],
00:10:22.124 "product_name": "Raid Volume",
00:10:22.124 "block_size": 512,
00:10:22.124 "num_blocks": 63488,
00:10:22.124 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0",
00:10:22.124 "assigned_rate_limits": {
00:10:22.124 "rw_ios_per_sec": 0,
00:10:22.124 "rw_mbytes_per_sec": 0,
00:10:22.124 "r_mbytes_per_sec": 0,
00:10:22.124 "w_mbytes_per_sec": 0
00:10:22.124 },
00:10:22.124 "claimed": false,
00:10:22.124 "zoned": false,
00:10:22.124 "supported_io_types": {
00:10:22.124 "read": true,
00:10:22.124 "write": true,
00:10:22.124 "unmap": false,
00:10:22.124 "flush": false,
00:10:22.124 "reset": true,
00:10:22.124 "nvme_admin": false,
00:10:22.124 "nvme_io": false,
00:10:22.124 "nvme_io_md": false,
00:10:22.124 "write_zeroes": true,
00:10:22.124 "zcopy": false,
00:10:22.124 "get_zone_info": false,
00:10:22.124 "zone_management": false,
00:10:22.124 "zone_append": false,
00:10:22.124 "compare": false,
00:10:22.124 "compare_and_write": false,
00:10:22.124 "abort": false,
00:10:22.124 "seek_hole": false,
00:10:22.124 "seek_data": false,
00:10:22.124 "copy": false,
00:10:22.124 "nvme_iov_md": false
00:10:22.124 },
00:10:22.124 "memory_domains": [
00:10:22.124 {
00:10:22.124 "dma_device_id": "system",
00:10:22.124 "dma_device_type": 1
00:10:22.124 },
00:10:22.124 {
00:10:22.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:22.124 "dma_device_type": 2
00:10:22.124 },
00:10:22.124 {
00:10:22.124 "dma_device_id": "system",
00:10:22.124 "dma_device_type": 1
00:10:22.124 },
00:10:22.124 {
00:10:22.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:22.125 "dma_device_type": 2
00:10:22.125 }
00:10:22.125 ],
00:10:22.125 "driver_specific": {
00:10:22.125 "raid": {
00:10:22.125 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0",
00:10:22.125 "strip_size_kb": 0,
00:10:22.125 "state": "online",
00:10:22.125 "raid_level": "raid1",
00:10:22.125 "superblock": true,
00:10:22.125 "num_base_bdevs": 2,
00:10:22.125 "num_base_bdevs_discovered": 2,
00:10:22.125 "num_base_bdevs_operational": 2,
00:10:22.125 "base_bdevs_list": [
00:10:22.125 {
00:10:22.125 "name": "pt1",
00:10:22.125 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:22.125 "is_configured": true,
00:10:22.125 "data_offset": 2048,
00:10:22.125 "data_size": 63488
00:10:22.125 },
00:10:22.125 {
00:10:22.125 "name": "pt2",
00:10:22.125 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:22.125 "is_configured": true,
00:10:22.125 "data_offset": 2048,
00:10:22.125 "data_size": 63488
00:10:22.125 }
00:10:22.125 ]
00:10:22.125 }
00:10:22.125 }
00:10:22.125 }'
00:10:22.125 22:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:22.383 pt2'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:22.383 [2024-09-27 22:27:18.163479] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=97835087-6496-4f5b-8537-b550ddd470a0
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 97835087-6496-4f5b-8537-b550ddd470a0 ']'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.383 [2024-09-27 22:27:18.211241] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:22.383 [2024-09-27 22:27:18.211282] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:22.383 [2024-09-27 22:27:18.211387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:22.383 [2024-09-27 22:27:18.211460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:22.383 [2024-09-27 22:27:18.211478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:22.383 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.643 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.643 [2024-09-27 22:27:18.339227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:22.644 [2024-09-27 22:27:18.341629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:22.644 [2024-09-27 22:27:18.341720] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:22.644 [2024-09-27 22:27:18.341791] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:22.644 [2024-09-27 22:27:18.341813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:22.644 [2024-09-27 22:27:18.341829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:22.644 request:
00:10:22.644 {
00:10:22.644 "name": "raid_bdev1",
00:10:22.644 "raid_level": "raid1",
00:10:22.644 "base_bdevs": [
00:10:22.644 "malloc1",
00:10:22.644 "malloc2"
00:10:22.644 ],
00:10:22.644 "superblock": false,
00:10:22.644 "method": "bdev_raid_create",
00:10:22.644 "req_id": 1
00:10:22.644 }
00:10:22.644 Got JSON-RPC error response
00:10:22.644 response:
00:10:22.644 {
00:10:22.644 "code": -17,
00:10:22.644 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:22.644 }
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.644 [2024-09-27 22:27:18.407233] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:22.644 [2024-09-27 22:27:18.407325] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:22.644 [2024-09-27 22:27:18.407353] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:10:22.644 [2024-09-27 22:27:18.407373] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:22.644 [2024-09-27 22:27:18.409993] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:22.644 [2024-09-27 22:27:18.410051] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:22.644 [2024-09-27 22:27:18.410159] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:22.644 [2024-09-27 22:27:18.410238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:22.644 pt1
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:22.644 "name": "raid_bdev1",
00:10:22.644 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0",
00:10:22.644 "strip_size_kb": 0,
00:10:22.644 "state": "configuring",
00:10:22.644 "raid_level": "raid1",
00:10:22.644 "superblock": true,
00:10:22.644 "num_base_bdevs": 2,
00:10:22.644 "num_base_bdevs_discovered": 1,
00:10:22.644 "num_base_bdevs_operational": 2,
00:10:22.644 "base_bdevs_list": [
00:10:22.644 {
00:10:22.644 "name": "pt1",
00:10:22.644 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:22.644 "is_configured": true,
00:10:22.644 "data_offset": 2048,
00:10:22.644 "data_size": 63488
00:10:22.644 },
00:10:22.644 {
00:10:22.644 "name": null,
00:10:22.644 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:22.644 "is_configured": false,
00:10:22.644 "data_offset": 2048,
00:10:22.644 "data_size": 63488
00:10:22.644 }
00:10:22.644 ]
00:10:22.644 }'
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:22.644 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.216 [2024-09-27 22:27:18.847200] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:23.216 [2024-09-27 22:27:18.847551] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:23.216 [2024-09-27 22:27:18.847586] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:10:23.216 [2024-09-27 22:27:18.847604] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:23.216 [2024-09-27 22:27:18.848188] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:23.216 [2024-09-27 22:27:18.848231] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:23.216 [2024-09-27 22:27:18.848336] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:23.216 [2024-09-27 22:27:18.848367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:23.216 [2024-09-27 22:27:18.848501] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:23.216 [2024-09-27 22:27:18.848517] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:23.216 [2024-09-27 22:27:18.848798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:23.216 [2024-09-27 22:27:18.849012] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:23.216 [2024-09-27 22:27:18.849026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:23.216 [2024-09-27 22:27:18.849198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:23.216 pt2
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:23.216 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:23.217 "name": "raid_bdev1",
00:10:23.217 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0",
00:10:23.217 "strip_size_kb": 0,
00:10:23.217 "state": "online",
00:10:23.217 "raid_level": "raid1",
00:10:23.217 "superblock": true,
00:10:23.217 "num_base_bdevs": 2,
00:10:23.217 "num_base_bdevs_discovered": 2,
00:10:23.217 "num_base_bdevs_operational": 2,
00:10:23.217 "base_bdevs_list": [
00:10:23.217 {
00:10:23.217 "name": "pt1",
00:10:23.217 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:23.217 "is_configured": true,
00:10:23.217 "data_offset": 2048,
00:10:23.217 "data_size": 63488
00:10:23.217 },
00:10:23.217 {
00:10:23.217 "name": "pt2",
00:10:23.217 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:23.217 "is_configured": true,
00:10:23.217 "data_offset": 2048,
00:10:23.217 "data_size": 63488
00:10:23.217 }
00:10:23.217 ]
00:10:23.217 }'
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:23.217 22:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.482 [2024-09-27 22:27:19.295469] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:23.482 "name": "raid_bdev1",
00:10:23.482 "aliases": [
00:10:23.482 "97835087-6496-4f5b-8537-b550ddd470a0"
00:10:23.482 ],
00:10:23.482 "product_name": "Raid Volume",
00:10:23.482 "block_size": 512,
00:10:23.482 "num_blocks": 63488,
00:10:23.482 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0",
00:10:23.482 "assigned_rate_limits": {
00:10:23.482 "rw_ios_per_sec": 0,
00:10:23.482 "rw_mbytes_per_sec": 0,
00:10:23.482 "r_mbytes_per_sec": 0,
00:10:23.482 "w_mbytes_per_sec": 0
00:10:23.482 },
00:10:23.482 "claimed": false,
00:10:23.482 "zoned": false,
00:10:23.482 "supported_io_types": {
00:10:23.482 "read": true,
00:10:23.482 "write": true,
00:10:23.482 "unmap": false,
00:10:23.482 "flush": false,
00:10:23.482 "reset": true,
00:10:23.482 "nvme_admin": false,
00:10:23.482 "nvme_io": false,
00:10:23.482 "nvme_io_md": false,
00:10:23.482 "write_zeroes": true,
00:10:23.482 "zcopy": false,
00:10:23.482 "get_zone_info": false,
00:10:23.482 "zone_management": false,
00:10:23.482 "zone_append": false,
00:10:23.482 "compare": false,
00:10:23.482 "compare_and_write": false,
00:10:23.482 "abort": false,
00:10:23.482 "seek_hole": false,
00:10:23.482 "seek_data": false,
00:10:23.482 "copy": false,
00:10:23.482 "nvme_iov_md": false
00:10:23.482 },
00:10:23.482 "memory_domains": [
00:10:23.482 {
00:10:23.482 "dma_device_id":
"system", 00:10:23.482 "dma_device_type": 1 00:10:23.482 }, 00:10:23.482 { 00:10:23.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.482 "dma_device_type": 2 00:10:23.482 }, 00:10:23.482 { 00:10:23.482 "dma_device_id": "system", 00:10:23.482 "dma_device_type": 1 00:10:23.482 }, 00:10:23.482 { 00:10:23.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.482 "dma_device_type": 2 00:10:23.482 } 00:10:23.482 ], 00:10:23.482 "driver_specific": { 00:10:23.482 "raid": { 00:10:23.482 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0", 00:10:23.482 "strip_size_kb": 0, 00:10:23.482 "state": "online", 00:10:23.482 "raid_level": "raid1", 00:10:23.482 "superblock": true, 00:10:23.482 "num_base_bdevs": 2, 00:10:23.482 "num_base_bdevs_discovered": 2, 00:10:23.482 "num_base_bdevs_operational": 2, 00:10:23.482 "base_bdevs_list": [ 00:10:23.482 { 00:10:23.482 "name": "pt1", 00:10:23.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.482 "is_configured": true, 00:10:23.482 "data_offset": 2048, 00:10:23.482 "data_size": 63488 00:10:23.482 }, 00:10:23.482 { 00:10:23.482 "name": "pt2", 00:10:23.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.482 "is_configured": true, 00:10:23.482 "data_offset": 2048, 00:10:23.482 "data_size": 63488 00:10:23.482 } 00:10:23.482 ] 00:10:23.482 } 00:10:23.482 } 00:10:23.482 }' 00:10:23.482 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.740 pt2' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.740 [2024-09-27 22:27:19.515492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 97835087-6496-4f5b-8537-b550ddd470a0 '!=' 97835087-6496-4f5b-8537-b550ddd470a0 ']' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.740 [2024-09-27 22:27:19.559391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.740 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.998 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.998 "name": "raid_bdev1", 00:10:23.998 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0", 00:10:23.998 "strip_size_kb": 0, 00:10:23.998 "state": "online", 00:10:23.998 "raid_level": "raid1", 00:10:23.998 "superblock": true, 00:10:23.998 "num_base_bdevs": 2, 00:10:23.998 "num_base_bdevs_discovered": 1, 00:10:23.998 "num_base_bdevs_operational": 1, 00:10:23.998 "base_bdevs_list": [ 00:10:23.998 { 00:10:23.998 "name": null, 00:10:23.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.998 "is_configured": false, 00:10:23.998 "data_offset": 0, 00:10:23.998 "data_size": 63488 00:10:23.998 }, 00:10:23.998 { 00:10:23.998 "name": "pt2", 00:10:23.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.998 "is_configured": true, 00:10:23.998 "data_offset": 2048, 00:10:23.998 "data_size": 63488 00:10:23.998 } 00:10:23.998 ] 00:10:23.998 }' 
00:10:23.998 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.998 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.256 [2024-09-27 22:27:19.979229] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.256 [2024-09-27 22:27:19.979277] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.256 [2024-09-27 22:27:19.979385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.256 [2024-09-27 22:27:19.979443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.256 [2024-09-27 22:27:19.979465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.256 22:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.256 [2024-09-27 22:27:20.039280] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.256 [2024-09-27 22:27:20.039379] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.256 [2024-09-27 22:27:20.039403] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.256 [2024-09-27 22:27:20.039421] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.256 
[2024-09-27 22:27:20.042233] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.256 [2024-09-27 22:27:20.042445] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.256 [2024-09-27 22:27:20.042579] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.256 [2024-09-27 22:27:20.042636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.256 [2024-09-27 22:27:20.042769] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.256 [2024-09-27 22:27:20.042787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.256 [2024-09-27 22:27:20.043115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.256 [2024-09-27 22:27:20.043301] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.256 [2024-09-27 22:27:20.043313] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:24.256 pt2 00:10:24.256 [2024-09-27 22:27:20.043530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.256 "name": "raid_bdev1", 00:10:24.256 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0", 00:10:24.256 "strip_size_kb": 0, 00:10:24.256 "state": "online", 00:10:24.256 "raid_level": "raid1", 00:10:24.256 "superblock": true, 00:10:24.256 "num_base_bdevs": 2, 00:10:24.256 "num_base_bdevs_discovered": 1, 00:10:24.256 "num_base_bdevs_operational": 1, 00:10:24.256 "base_bdevs_list": [ 00:10:24.256 { 00:10:24.256 "name": null, 00:10:24.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.256 "is_configured": false, 00:10:24.256 "data_offset": 2048, 00:10:24.256 "data_size": 63488 00:10:24.256 }, 00:10:24.256 { 00:10:24.256 "name": "pt2", 00:10:24.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.256 "is_configured": true, 00:10:24.256 "data_offset": 2048, 00:10:24.256 "data_size": 63488 00:10:24.256 } 00:10:24.256 ] 00:10:24.256 }' 
00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.256 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.823 [2024-09-27 22:27:20.463206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.823 [2024-09-27 22:27:20.463250] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.823 [2024-09-27 22:27:20.463340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.823 [2024-09-27 22:27:20.463398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.823 [2024-09-27 22:27:20.463412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.823 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.824 [2024-09-27 22:27:20.527227] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.824 [2024-09-27 22:27:20.527319] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.824 [2024-09-27 22:27:20.527348] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:24.824 [2024-09-27 22:27:20.527365] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.824 [2024-09-27 22:27:20.530258] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.824 [2024-09-27 22:27:20.530458] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.824 [2024-09-27 22:27:20.530606] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:24.824 [2024-09-27 22:27:20.530670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.824 [2024-09-27 22:27:20.530831] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:24.824 [2024-09-27 22:27:20.530845] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.824 [2024-09-27 22:27:20.530871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:24.824 [2024-09-27 22:27:20.530938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:10:24.824 [2024-09-27 22:27:20.531066] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:24.824 [2024-09-27 22:27:20.531079] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.824 [2024-09-27 22:27:20.531370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:24.824 [2024-09-27 22:27:20.531549] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:24.824 [2024-09-27 22:27:20.531566] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:24.824 [2024-09-27 22:27:20.531808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.824 pt1 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.824 "name": "raid_bdev1", 00:10:24.824 "uuid": "97835087-6496-4f5b-8537-b550ddd470a0", 00:10:24.824 "strip_size_kb": 0, 00:10:24.824 "state": "online", 00:10:24.824 "raid_level": "raid1", 00:10:24.824 "superblock": true, 00:10:24.824 "num_base_bdevs": 2, 00:10:24.824 "num_base_bdevs_discovered": 1, 00:10:24.824 "num_base_bdevs_operational": 1, 00:10:24.824 "base_bdevs_list": [ 00:10:24.824 { 00:10:24.824 "name": null, 00:10:24.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.824 "is_configured": false, 00:10:24.824 "data_offset": 2048, 00:10:24.824 "data_size": 63488 00:10:24.824 }, 00:10:24.824 { 00:10:24.824 "name": "pt2", 00:10:24.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.824 "is_configured": true, 00:10:24.824 "data_offset": 2048, 00:10:24.824 "data_size": 63488 00:10:24.824 } 00:10:24.824 ] 00:10:24.824 }' 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.824 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.082 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:25.082 22:27:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:25.082 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.082 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.340 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.340 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:25.340 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.340 22:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:25.340 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.340 22:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.340 [2024-09-27 22:27:20.995501] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 97835087-6496-4f5b-8537-b550ddd470a0 '!=' 97835087-6496-4f5b-8537-b550ddd470a0 ']' 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63634 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63634 ']' 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63634 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63634 00:10:25.340 killing process with pid 
63634 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63634' 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63634 00:10:25.340 [2024-09-27 22:27:21.076368] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.340 22:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63634 00:10:25.340 [2024-09-27 22:27:21.076522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.340 [2024-09-27 22:27:21.076587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.340 [2024-09-27 22:27:21.076608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:25.599 [2024-09-27 22:27:21.304125] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.139 22:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:28.139 00:10:28.139 real 0m7.463s 00:10:28.139 user 0m10.428s 00:10:28.139 sys 0m1.325s 00:10:28.139 22:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.139 22:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.139 ************************************ 00:10:28.139 END TEST raid_superblock_test 00:10:28.139 ************************************ 00:10:28.139 22:27:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:28.139 22:27:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:28.139 22:27:23 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.139 22:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.139 ************************************ 00:10:28.139 START TEST raid_read_error_test 00:10:28.139 ************************************ 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.139 22:27:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X9cZfO5LVs 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63975 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63975 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63975 ']' 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.139 22:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.139 [2024-09-27 22:27:23.633402] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:10:28.139 [2024-09-27 22:27:23.634309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63975 ] 00:10:28.139 [2024-09-27 22:27:23.810202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.398 [2024-09-27 22:27:24.063363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.656 [2024-09-27 22:27:24.325754] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.656 [2024-09-27 22:27:24.325794] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.224 BaseBdev1_malloc 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.224 true 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.224 [2024-09-27 22:27:24.893873] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.224 [2024-09-27 22:27:24.893946] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.224 [2024-09-27 22:27:24.893969] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.224 [2024-09-27 22:27:24.894003] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.224 [2024-09-27 22:27:24.896622] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.224 [2024-09-27 22:27:24.896837] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.224 BaseBdev1 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.224 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:29.225 BaseBdev2_malloc 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 true 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 [2024-09-27 22:27:24.972405] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.225 [2024-09-27 22:27:24.972706] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.225 [2024-09-27 22:27:24.972740] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.225 [2024-09-27 22:27:24.972757] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.225 [2024-09-27 22:27:24.975450] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.225 [2024-09-27 22:27:24.975505] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.225 BaseBdev2 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:29.225 22:27:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 [2024-09-27 22:27:24.984475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.225 [2024-09-27 22:27:24.986758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.225 [2024-09-27 22:27:24.987185] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.225 [2024-09-27 22:27:24.987213] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.225 [2024-09-27 22:27:24.987514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:29.225 [2024-09-27 22:27:24.987696] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.225 [2024-09-27 22:27:24.987707] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:29.225 [2024-09-27 22:27:24.987900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.225 22:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.225 22:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.225 22:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.225 "name": "raid_bdev1", 00:10:29.225 "uuid": "5708db48-0e07-481c-bd59-bb76b64b80dd", 00:10:29.225 "strip_size_kb": 0, 00:10:29.225 "state": "online", 00:10:29.225 "raid_level": "raid1", 00:10:29.225 "superblock": true, 00:10:29.225 "num_base_bdevs": 2, 00:10:29.225 "num_base_bdevs_discovered": 2, 00:10:29.225 "num_base_bdevs_operational": 2, 00:10:29.225 "base_bdevs_list": [ 00:10:29.225 { 00:10:29.225 "name": "BaseBdev1", 00:10:29.225 "uuid": "125b5c3f-399d-58a2-ba8b-aa7c9344259e", 00:10:29.225 "is_configured": true, 00:10:29.225 "data_offset": 2048, 00:10:29.225 "data_size": 63488 00:10:29.225 }, 00:10:29.225 { 00:10:29.225 "name": "BaseBdev2", 00:10:29.225 "uuid": "c6363e51-daa6-5cde-bece-6b9b627aca6c", 00:10:29.225 "is_configured": true, 00:10:29.225 "data_offset": 2048, 00:10:29.225 "data_size": 63488 00:10:29.225 } 00:10:29.225 ] 00:10:29.225 }' 00:10:29.225 22:27:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.225 22:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.792 22:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.792 22:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.792 [2024-09-27 22:27:25.529580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.727 22:27:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.727 "name": "raid_bdev1", 00:10:30.727 "uuid": "5708db48-0e07-481c-bd59-bb76b64b80dd", 00:10:30.727 "strip_size_kb": 0, 00:10:30.727 "state": "online", 00:10:30.727 "raid_level": "raid1", 00:10:30.727 "superblock": true, 00:10:30.727 "num_base_bdevs": 2, 00:10:30.727 "num_base_bdevs_discovered": 2, 00:10:30.727 "num_base_bdevs_operational": 2, 00:10:30.727 "base_bdevs_list": [ 00:10:30.727 { 00:10:30.727 "name": "BaseBdev1", 00:10:30.727 "uuid": "125b5c3f-399d-58a2-ba8b-aa7c9344259e", 00:10:30.727 "is_configured": true, 00:10:30.727 "data_offset": 2048, 00:10:30.727 "data_size": 63488 00:10:30.727 }, 00:10:30.727 { 00:10:30.727 "name": "BaseBdev2", 00:10:30.727 "uuid": "c6363e51-daa6-5cde-bece-6b9b627aca6c", 00:10:30.727 "is_configured": true, 00:10:30.727 "data_offset": 2048, 00:10:30.727 "data_size": 63488 
00:10:30.727 } 00:10:30.727 ] 00:10:30.727 }' 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.727 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.987 [2024-09-27 22:27:26.820377] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.987 [2024-09-27 22:27:26.820595] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.987 [2024-09-27 22:27:26.823601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.987 [2024-09-27 22:27:26.823800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.987 [2024-09-27 22:27:26.823900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.987 [2024-09-27 22:27:26.823917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:30.987 { 00:10:30.987 "results": [ 00:10:30.987 { 00:10:30.987 "job": "raid_bdev1", 00:10:30.987 "core_mask": "0x1", 00:10:30.987 "workload": "randrw", 00:10:30.987 "percentage": 50, 00:10:30.987 "status": "finished", 00:10:30.987 "queue_depth": 1, 00:10:30.987 "io_size": 131072, 00:10:30.987 "runtime": 1.290826, 00:10:30.987 "iops": 16532.824718436103, 00:10:30.987 "mibps": 2066.603089804513, 00:10:30.987 "io_failed": 0, 00:10:30.987 "io_timeout": 0, 00:10:30.987 "avg_latency_us": 57.459304779212445, 00:10:30.987 "min_latency_us": 26.216867469879517, 00:10:30.987 "max_latency_us": 1612.0803212851406 00:10:30.987 } 00:10:30.987 ], 
00:10:30.987 "core_count": 1 00:10:30.987 } 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63975 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63975 ']' 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63975 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.987 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63975 00:10:31.247 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.247 killing process with pid 63975 00:10:31.247 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.247 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63975' 00:10:31.247 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63975 00:10:31.247 [2024-09-27 22:27:26.878099] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.247 22:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63975 00:10:31.247 [2024-09-27 22:27:27.027491] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X9cZfO5LVs 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:33.784 ************************************ 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.784 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:33.785 22:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:33.785 00:10:33.785 real 0m5.744s 00:10:33.785 user 0m6.507s 00:10:33.785 sys 0m0.729s 00:10:33.785 22:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.785 22:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.785 END TEST raid_read_error_test 00:10:33.785 ************************************ 00:10:33.785 22:27:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:33.785 22:27:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:33.785 22:27:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.785 22:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.785 ************************************ 00:10:33.785 START TEST raid_write_error_test 00:10:33.785 ************************************ 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KGid6tzPph 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64132 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64132 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 64132 ']' 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.785 22:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.785 [2024-09-27 22:27:29.447298] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:10:33.785 [2024-09-27 22:27:29.447437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64132 ] 00:10:33.785 [2024-09-27 22:27:29.621294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.044 [2024-09-27 22:27:29.888263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.303 [2024-09-27 22:27:30.163470] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.303 [2024-09-27 22:27:30.163513] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.912 BaseBdev1_malloc 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.912 true 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.912 [2024-09-27 22:27:30.724790] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.912 [2024-09-27 22:27:30.725071] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.912 [2024-09-27 22:27:30.725109] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.912 [2024-09-27 22:27:30.725126] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.912 [2024-09-27 22:27:30.727825] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.912 [2024-09-27 22:27:30.727879] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.912 BaseBdev1 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.912 BaseBdev2_malloc 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.912 22:27:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.912 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.170 true 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.170 [2024-09-27 22:27:30.799474] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.170 [2024-09-27 22:27:30.799801] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.170 [2024-09-27 22:27:30.799854] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.170 [2024-09-27 22:27:30.799880] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.170 [2024-09-27 22:27:30.802619] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.170 [2024-09-27 22:27:30.802669] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.170 BaseBdev2 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.170 [2024-09-27 22:27:30.811520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:35.170 [2024-09-27 22:27:30.813808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.170 [2024-09-27 22:27:30.814062] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:35.170 [2024-09-27 22:27:30.814082] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:35.170 [2024-09-27 22:27:30.814417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:35.170 [2024-09-27 22:27:30.814679] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:35.170 [2024-09-27 22:27:30.814697] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:35.170 [2024-09-27 22:27:30.814930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.170 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.171 "name": "raid_bdev1", 00:10:35.171 "uuid": "775736a7-7084-461e-9714-63c949178ad6", 00:10:35.171 "strip_size_kb": 0, 00:10:35.171 "state": "online", 00:10:35.171 "raid_level": "raid1", 00:10:35.171 "superblock": true, 00:10:35.171 "num_base_bdevs": 2, 00:10:35.171 "num_base_bdevs_discovered": 2, 00:10:35.171 "num_base_bdevs_operational": 2, 00:10:35.171 "base_bdevs_list": [ 00:10:35.171 { 00:10:35.171 "name": "BaseBdev1", 00:10:35.171 "uuid": "65203aeb-6997-5dbc-801a-1f38ade96628", 00:10:35.171 "is_configured": true, 00:10:35.171 "data_offset": 2048, 00:10:35.171 "data_size": 63488 00:10:35.171 }, 00:10:35.171 { 00:10:35.171 "name": "BaseBdev2", 00:10:35.171 "uuid": "7e7df980-9f32-56da-ba09-4dd56faf6e16", 00:10:35.171 "is_configured": true, 00:10:35.171 "data_offset": 2048, 00:10:35.171 "data_size": 63488 00:10:35.171 } 00:10:35.171 ] 00:10:35.171 }' 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.171 22:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.429 22:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.429 22:27:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.689 [2024-09-27 22:27:31.412888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 [2024-09-27 22:27:32.269491] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:36.626 [2024-09-27 22:27:32.269560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.626 [2024-09-27 22:27:32.269764] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.626 "name": "raid_bdev1", 00:10:36.626 "uuid": "775736a7-7084-461e-9714-63c949178ad6", 00:10:36.626 "strip_size_kb": 0, 00:10:36.626 "state": "online", 00:10:36.626 "raid_level": "raid1", 00:10:36.626 "superblock": true, 00:10:36.626 "num_base_bdevs": 2, 00:10:36.626 "num_base_bdevs_discovered": 1, 00:10:36.626 "num_base_bdevs_operational": 1, 00:10:36.626 "base_bdevs_list": [ 00:10:36.626 { 00:10:36.626 "name": null, 00:10:36.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.626 "is_configured": false, 00:10:36.626 "data_offset": 0, 00:10:36.626 "data_size": 63488 00:10:36.626 }, 00:10:36.626 { 00:10:36.626 "name": 
"BaseBdev2", 00:10:36.626 "uuid": "7e7df980-9f32-56da-ba09-4dd56faf6e16", 00:10:36.626 "is_configured": true, 00:10:36.626 "data_offset": 2048, 00:10:36.626 "data_size": 63488 00:10:36.626 } 00:10:36.626 ] 00:10:36.626 }' 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.626 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.885 [2024-09-27 22:27:32.731212] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.885 [2024-09-27 22:27:32.731245] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.885 [2024-09-27 22:27:32.734148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.885 [2024-09-27 22:27:32.734304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.885 [2024-09-27 22:27:32.734405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.885 [2024-09-27 22:27:32.734630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:36.885 { 00:10:36.885 "results": [ 00:10:36.885 { 00:10:36.885 "job": "raid_bdev1", 00:10:36.885 "core_mask": "0x1", 00:10:36.885 "workload": "randrw", 00:10:36.885 "percentage": 50, 00:10:36.885 "status": "finished", 00:10:36.885 "queue_depth": 1, 00:10:36.885 "io_size": 131072, 00:10:36.885 "runtime": 1.317937, 00:10:36.885 "iops": 19450.095110767812, 00:10:36.885 "mibps": 2431.2618888459765, 00:10:36.885 "io_failed": 0, 00:10:36.885 "io_timeout": 0, 
00:10:36.885 "avg_latency_us": 48.37121481165357, 00:10:36.885 "min_latency_us": 25.805622489959838, 00:10:36.885 "max_latency_us": 1605.5004016064256 00:10:36.885 } 00:10:36.885 ], 00:10:36.885 "core_count": 1 00:10:36.885 } 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64132 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 64132 ']' 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 64132 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.885 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64132 00:10:37.143 killing process with pid 64132 00:10:37.143 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.143 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.143 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64132' 00:10:37.143 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 64132 00:10:37.143 [2024-09-27 22:27:32.786995] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.143 22:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 64132 00:10:37.143 [2024-09-27 22:27:32.935001] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KGid6tzPph 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:39.714 00:10:39.714 real 0m5.807s 00:10:39.714 user 0m6.701s 00:10:39.714 sys 0m0.692s 00:10:39.714 ************************************ 00:10:39.714 END TEST raid_write_error_test 00:10:39.714 ************************************ 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.714 22:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.714 22:27:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:39.714 22:27:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:39.714 22:27:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:10:39.714 22:27:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:39.714 22:27:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.714 22:27:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.714 ************************************ 00:10:39.714 START TEST raid_state_function_test 00:10:39.714 ************************************ 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:39.714 
22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:39.714 Process raid pid: 64286 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64286 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64286' 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64286 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 64286 ']' 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.714 22:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.714 [2024-09-27 22:27:35.324312] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:10:39.714 [2024-09-27 22:27:35.324459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.714 [2024-09-27 22:27:35.501091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.975 [2024-09-27 22:27:35.756213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.234 [2024-09-27 22:27:36.017616] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.234 [2024-09-27 22:27:36.017659] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.801 [2024-09-27 22:27:36.535173] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.801 [2024-09-27 22:27:36.535239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.801 [2024-09-27 22:27:36.535251] 
bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.801 [2024-09-27 22:27:36.535266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.801 [2024-09-27 22:27:36.535274] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.801 [2024-09-27 22:27:36.535286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.801 22:27:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.801 "name": "Existed_Raid", 00:10:40.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.801 "strip_size_kb": 64, 00:10:40.801 "state": "configuring", 00:10:40.801 "raid_level": "raid0", 00:10:40.801 "superblock": false, 00:10:40.801 "num_base_bdevs": 3, 00:10:40.801 "num_base_bdevs_discovered": 0, 00:10:40.801 "num_base_bdevs_operational": 3, 00:10:40.801 "base_bdevs_list": [ 00:10:40.801 { 00:10:40.801 "name": "BaseBdev1", 00:10:40.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.801 "is_configured": false, 00:10:40.801 "data_offset": 0, 00:10:40.801 "data_size": 0 00:10:40.801 }, 00:10:40.801 { 00:10:40.801 "name": "BaseBdev2", 00:10:40.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.801 "is_configured": false, 00:10:40.801 "data_offset": 0, 00:10:40.801 "data_size": 0 00:10:40.801 }, 00:10:40.801 { 00:10:40.801 "name": "BaseBdev3", 00:10:40.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.801 "is_configured": false, 00:10:40.801 "data_offset": 0, 00:10:40.801 "data_size": 0 00:10:40.801 } 00:10:40.801 ] 00:10:40.801 }' 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.801 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.368 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.368 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.368 22:27:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.368 [2024-09-27 22:27:36.994399] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.368 [2024-09-27 22:27:36.994444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:41.368 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.368 22:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.368 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.368 22:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.368 [2024-09-27 22:27:37.006400] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.368 [2024-09-27 22:27:37.006588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.368 [2024-09-27 22:27:37.006610] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.368 [2024-09-27 22:27:37.006624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.368 [2024-09-27 22:27:37.006632] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.368 [2024-09-27 22:27:37.006644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.368 [2024-09-27 22:27:37.060457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.368 BaseBdev1 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.368 [ 00:10:41.368 { 00:10:41.368 "name": "BaseBdev1", 00:10:41.368 "aliases": [ 00:10:41.368 "58ad720c-db2f-4092-a256-34db43c7b3df" 00:10:41.368 ], 00:10:41.368 
"product_name": "Malloc disk", 00:10:41.368 "block_size": 512, 00:10:41.368 "num_blocks": 65536, 00:10:41.368 "uuid": "58ad720c-db2f-4092-a256-34db43c7b3df", 00:10:41.368 "assigned_rate_limits": { 00:10:41.368 "rw_ios_per_sec": 0, 00:10:41.368 "rw_mbytes_per_sec": 0, 00:10:41.368 "r_mbytes_per_sec": 0, 00:10:41.368 "w_mbytes_per_sec": 0 00:10:41.368 }, 00:10:41.368 "claimed": true, 00:10:41.368 "claim_type": "exclusive_write", 00:10:41.368 "zoned": false, 00:10:41.368 "supported_io_types": { 00:10:41.368 "read": true, 00:10:41.368 "write": true, 00:10:41.368 "unmap": true, 00:10:41.368 "flush": true, 00:10:41.368 "reset": true, 00:10:41.368 "nvme_admin": false, 00:10:41.368 "nvme_io": false, 00:10:41.368 "nvme_io_md": false, 00:10:41.368 "write_zeroes": true, 00:10:41.368 "zcopy": true, 00:10:41.368 "get_zone_info": false, 00:10:41.368 "zone_management": false, 00:10:41.368 "zone_append": false, 00:10:41.368 "compare": false, 00:10:41.368 "compare_and_write": false, 00:10:41.368 "abort": true, 00:10:41.368 "seek_hole": false, 00:10:41.368 "seek_data": false, 00:10:41.368 "copy": true, 00:10:41.368 "nvme_iov_md": false 00:10:41.368 }, 00:10:41.368 "memory_domains": [ 00:10:41.368 { 00:10:41.368 "dma_device_id": "system", 00:10:41.368 "dma_device_type": 1 00:10:41.368 }, 00:10:41.368 { 00:10:41.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.368 "dma_device_type": 2 00:10:41.368 } 00:10:41.368 ], 00:10:41.368 "driver_specific": {} 00:10:41.368 } 00:10:41.368 ] 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.368 22:27:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.368 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.369 "name": "Existed_Raid", 00:10:41.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.369 "strip_size_kb": 64, 00:10:41.369 "state": "configuring", 00:10:41.369 "raid_level": "raid0", 00:10:41.369 "superblock": false, 00:10:41.369 "num_base_bdevs": 3, 00:10:41.369 "num_base_bdevs_discovered": 1, 00:10:41.369 "num_base_bdevs_operational": 3, 00:10:41.369 "base_bdevs_list": [ 00:10:41.369 { 00:10:41.369 "name": "BaseBdev1", 
00:10:41.369 "uuid": "58ad720c-db2f-4092-a256-34db43c7b3df", 00:10:41.369 "is_configured": true, 00:10:41.369 "data_offset": 0, 00:10:41.369 "data_size": 65536 00:10:41.369 }, 00:10:41.369 { 00:10:41.369 "name": "BaseBdev2", 00:10:41.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.369 "is_configured": false, 00:10:41.369 "data_offset": 0, 00:10:41.369 "data_size": 0 00:10:41.369 }, 00:10:41.369 { 00:10:41.369 "name": "BaseBdev3", 00:10:41.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.369 "is_configured": false, 00:10:41.369 "data_offset": 0, 00:10:41.369 "data_size": 0 00:10:41.369 } 00:10:41.369 ] 00:10:41.369 }' 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.369 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.680 [2024-09-27 22:27:37.516104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.680 [2024-09-27 22:27:37.516161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.680 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.680 [2024-09-27 
22:27:37.528162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.681 [2024-09-27 22:27:37.530287] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.681 [2024-09-27 22:27:37.530467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.681 [2024-09-27 22:27:37.530489] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.681 [2024-09-27 22:27:37.530503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.681 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.939 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.939 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.939 "name": "Existed_Raid", 00:10:41.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.939 "strip_size_kb": 64, 00:10:41.939 "state": "configuring", 00:10:41.939 "raid_level": "raid0", 00:10:41.939 "superblock": false, 00:10:41.939 "num_base_bdevs": 3, 00:10:41.939 "num_base_bdevs_discovered": 1, 00:10:41.939 "num_base_bdevs_operational": 3, 00:10:41.939 "base_bdevs_list": [ 00:10:41.939 { 00:10:41.939 "name": "BaseBdev1", 00:10:41.939 "uuid": "58ad720c-db2f-4092-a256-34db43c7b3df", 00:10:41.939 "is_configured": true, 00:10:41.939 "data_offset": 0, 00:10:41.939 "data_size": 65536 00:10:41.939 }, 00:10:41.939 { 00:10:41.939 "name": "BaseBdev2", 00:10:41.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.939 "is_configured": false, 00:10:41.939 "data_offset": 0, 00:10:41.939 "data_size": 0 00:10:41.939 }, 00:10:41.939 { 00:10:41.939 "name": "BaseBdev3", 00:10:41.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.939 "is_configured": false, 00:10:41.939 "data_offset": 0, 00:10:41.939 "data_size": 0 00:10:41.939 } 00:10:41.939 ] 00:10:41.939 }' 00:10:41.939 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
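The `verify_raid_bdev_state` calls in this trace capture the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and then compare fields such as `state`, `raid_level`, and `num_base_bdevs_discovered` against expectations. A minimal sketch of that field check, runnable without SPDK or jq (plain grep against a JSON snippet copied from the log; the real `bdev_raid.sh` parses rpc output with jq):

```shell
# JSON shape taken from the raid_bdev_info captured in the log above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1
}'

verify_field() {
    # verify_field <key> <expected>: succeed iff the JSON blob contains
    # "<key>": <expected> (string values must include their quotes).
    # Substring grep stands in for the jq field extraction in the real test.
    local key=$1 expected=$2
    printf '%s\n' "$raid_bdev_info" | grep -q "\"$key\": $expected"
}

verify_field state '"configuring"'        || exit 1
verify_field raid_level '"raid0"'         || exit 1
verify_field num_base_bdevs_discovered 1  || exit 1
echo "state checks passed"
```

This mirrors only the comparison step; the actual helper also tracks `num_base_bdevs_operational` and re-polls the rpc between mutations.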
00:10:41.939 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.197 22:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.197 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.197 22:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.197 [2024-09-27 22:27:38.025768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.197 BaseBdev2 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.197 22:27:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.197 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.197 [ 00:10:42.197 { 00:10:42.197 "name": "BaseBdev2", 00:10:42.197 "aliases": [ 00:10:42.197 "482d817b-31fb-4b18-97b8-83bf08d1cb2a" 00:10:42.197 ], 00:10:42.197 "product_name": "Malloc disk", 00:10:42.198 "block_size": 512, 00:10:42.198 "num_blocks": 65536, 00:10:42.198 "uuid": "482d817b-31fb-4b18-97b8-83bf08d1cb2a", 00:10:42.198 "assigned_rate_limits": { 00:10:42.198 "rw_ios_per_sec": 0, 00:10:42.198 "rw_mbytes_per_sec": 0, 00:10:42.198 "r_mbytes_per_sec": 0, 00:10:42.198 "w_mbytes_per_sec": 0 00:10:42.198 }, 00:10:42.198 "claimed": true, 00:10:42.198 "claim_type": "exclusive_write", 00:10:42.198 "zoned": false, 00:10:42.198 "supported_io_types": { 00:10:42.198 "read": true, 00:10:42.198 "write": true, 00:10:42.198 "unmap": true, 00:10:42.198 "flush": true, 00:10:42.198 "reset": true, 00:10:42.198 "nvme_admin": false, 00:10:42.198 "nvme_io": false, 00:10:42.198 "nvme_io_md": false, 00:10:42.198 "write_zeroes": true, 00:10:42.198 "zcopy": true, 00:10:42.198 "get_zone_info": false, 00:10:42.198 "zone_management": false, 00:10:42.198 "zone_append": false, 00:10:42.198 "compare": false, 00:10:42.198 "compare_and_write": false, 00:10:42.198 "abort": true, 00:10:42.198 "seek_hole": false, 00:10:42.198 "seek_data": false, 00:10:42.198 "copy": true, 00:10:42.198 "nvme_iov_md": false 00:10:42.198 }, 00:10:42.198 "memory_domains": [ 00:10:42.198 { 00:10:42.198 "dma_device_id": "system", 00:10:42.198 "dma_device_type": 1 00:10:42.198 }, 00:10:42.198 { 00:10:42.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.198 "dma_device_type": 2 00:10:42.198 } 00:10:42.198 ], 00:10:42.198 "driver_specific": {} 00:10:42.198 } 00:10:42.198 ] 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.198 22:27:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.198 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.457 "name": "Existed_Raid", 00:10:42.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.457 "strip_size_kb": 64, 00:10:42.457 "state": "configuring", 00:10:42.457 "raid_level": "raid0", 00:10:42.457 "superblock": false, 00:10:42.457 "num_base_bdevs": 3, 00:10:42.457 "num_base_bdevs_discovered": 2, 00:10:42.457 "num_base_bdevs_operational": 3, 00:10:42.457 "base_bdevs_list": [ 00:10:42.457 { 00:10:42.457 "name": "BaseBdev1", 00:10:42.457 "uuid": "58ad720c-db2f-4092-a256-34db43c7b3df", 00:10:42.457 "is_configured": true, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 65536 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "name": "BaseBdev2", 00:10:42.457 "uuid": "482d817b-31fb-4b18-97b8-83bf08d1cb2a", 00:10:42.457 "is_configured": true, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 65536 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "name": "BaseBdev3", 00:10:42.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.457 "is_configured": false, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 0 00:10:42.457 } 00:10:42.457 ] 00:10:42.457 }' 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.457 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.716 [2024-09-27 22:27:38.537241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.716 [2024-09-27 22:27:38.537293] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.716 [2024-09-27 22:27:38.537310] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:42.716 [2024-09-27 22:27:38.537585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:42.716 [2024-09-27 22:27:38.537733] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.716 [2024-09-27 22:27:38.537747] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:42.716 [2024-09-27 22:27:38.538037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.716 BaseBdev3 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.716 
22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.716 [ 00:10:42.716 { 00:10:42.716 "name": "BaseBdev3", 00:10:42.716 "aliases": [ 00:10:42.716 "6db5cebd-5780-4ac7-b768-4a5158c5b8b2" 00:10:42.716 ], 00:10:42.716 "product_name": "Malloc disk", 00:10:42.716 "block_size": 512, 00:10:42.716 "num_blocks": 65536, 00:10:42.716 "uuid": "6db5cebd-5780-4ac7-b768-4a5158c5b8b2", 00:10:42.716 "assigned_rate_limits": { 00:10:42.716 "rw_ios_per_sec": 0, 00:10:42.716 "rw_mbytes_per_sec": 0, 00:10:42.716 "r_mbytes_per_sec": 0, 00:10:42.716 "w_mbytes_per_sec": 0 00:10:42.716 }, 00:10:42.716 "claimed": true, 00:10:42.716 "claim_type": "exclusive_write", 00:10:42.716 "zoned": false, 00:10:42.716 "supported_io_types": { 00:10:42.716 "read": true, 00:10:42.716 "write": true, 00:10:42.716 "unmap": true, 00:10:42.716 "flush": true, 00:10:42.716 "reset": true, 00:10:42.716 "nvme_admin": false, 00:10:42.716 "nvme_io": false, 00:10:42.716 "nvme_io_md": false, 00:10:42.716 "write_zeroes": true, 00:10:42.716 "zcopy": true, 00:10:42.716 "get_zone_info": false, 00:10:42.716 "zone_management": false, 00:10:42.716 "zone_append": false, 00:10:42.716 "compare": false, 00:10:42.716 "compare_and_write": false, 00:10:42.716 "abort": true, 00:10:42.716 "seek_hole": false, 00:10:42.716 "seek_data": false, 00:10:42.716 "copy": true, 00:10:42.716 "nvme_iov_md": false 00:10:42.716 }, 00:10:42.716 "memory_domains": [ 00:10:42.716 { 00:10:42.716 "dma_device_id": "system", 00:10:42.716 "dma_device_type": 1 00:10:42.716 }, 00:10:42.716 { 00:10:42.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.716 "dma_device_type": 2 00:10:42.716 } 00:10:42.716 ], 00:10:42.716 "driver_specific": {} 00:10:42.716 } 00:10:42.716 ] 
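After each `bdev_malloc_create`, the trace runs `waitforbdev`, which waits for examine to finish and then queries `bdev_get_bdevs -b <name> -t 2000` to confirm the bdev exists. A hedged pure-bash sketch of that flow, with `rpc_cmd` replaced by a stub (the real helper drives SPDK's rpc client):

```shell
# Stand-in for the SPDK target's bdev table.
registered_bdevs="BaseBdev1 BaseBdev2 BaseBdev3"

rpc_cmd() {
    # Stub for: rpc_cmd bdev_get_bdevs -b <name> -t <timeout_ms>
    # Succeeds (and prints a one-element JSON array, like the real RPC)
    # only when the named bdev is registered.
    local name=$3
    case " $registered_bdevs " in
        *" $name "*) printf '[{"name": "%s"}]\n' "$name"; return 0 ;;
        *) return 1 ;;
    esac
}

waitforbdev() {
    # Mirrors the waitforbdev shape seen in the log: default 2000 ms
    # timeout, lookup by name via bdev_get_bdevs.
    local bdev_name=$1 bdev_timeout=${2:-2000}
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev BaseBdev3 || exit 1
```

The `-t` timeout is handled inside the real RPC server (it retries until the bdev appears or the deadline passes); the stub collapses that to an immediate lookup.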
00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.716 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.717 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.717 22:27:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.975 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.975 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.975 "name": "Existed_Raid", 00:10:42.975 "uuid": "08900516-3348-48b7-b2cc-8de6f4865f89", 00:10:42.975 "strip_size_kb": 64, 00:10:42.975 "state": "online", 00:10:42.975 "raid_level": "raid0", 00:10:42.975 "superblock": false, 00:10:42.975 "num_base_bdevs": 3, 00:10:42.975 "num_base_bdevs_discovered": 3, 00:10:42.975 "num_base_bdevs_operational": 3, 00:10:42.975 "base_bdevs_list": [ 00:10:42.975 { 00:10:42.975 "name": "BaseBdev1", 00:10:42.975 "uuid": "58ad720c-db2f-4092-a256-34db43c7b3df", 00:10:42.975 "is_configured": true, 00:10:42.975 "data_offset": 0, 00:10:42.975 "data_size": 65536 00:10:42.975 }, 00:10:42.975 { 00:10:42.975 "name": "BaseBdev2", 00:10:42.975 "uuid": "482d817b-31fb-4b18-97b8-83bf08d1cb2a", 00:10:42.975 "is_configured": true, 00:10:42.975 "data_offset": 0, 00:10:42.975 "data_size": 65536 00:10:42.975 }, 00:10:42.975 { 00:10:42.975 "name": "BaseBdev3", 00:10:42.975 "uuid": "6db5cebd-5780-4ac7-b768-4a5158c5b8b2", 00:10:42.975 "is_configured": true, 00:10:42.975 "data_offset": 0, 00:10:42.975 "data_size": 65536 00:10:42.975 } 00:10:42.975 ] 00:10:42.975 }' 00:10:42.975 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.975 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.234 22:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.234 [2024-09-27 22:27:39.001043] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.234 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.234 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.234 "name": "Existed_Raid", 00:10:43.234 "aliases": [ 00:10:43.234 "08900516-3348-48b7-b2cc-8de6f4865f89" 00:10:43.234 ], 00:10:43.234 "product_name": "Raid Volume", 00:10:43.234 "block_size": 512, 00:10:43.234 "num_blocks": 196608, 00:10:43.234 "uuid": "08900516-3348-48b7-b2cc-8de6f4865f89", 00:10:43.234 "assigned_rate_limits": { 00:10:43.234 "rw_ios_per_sec": 0, 00:10:43.234 "rw_mbytes_per_sec": 0, 00:10:43.234 "r_mbytes_per_sec": 0, 00:10:43.234 "w_mbytes_per_sec": 0 00:10:43.234 }, 00:10:43.234 "claimed": false, 00:10:43.234 "zoned": false, 00:10:43.234 "supported_io_types": { 00:10:43.234 "read": true, 00:10:43.234 "write": true, 00:10:43.234 "unmap": true, 00:10:43.234 "flush": true, 00:10:43.234 "reset": true, 00:10:43.234 "nvme_admin": false, 00:10:43.234 "nvme_io": false, 00:10:43.234 "nvme_io_md": false, 00:10:43.234 "write_zeroes": true, 00:10:43.234 "zcopy": false, 00:10:43.234 "get_zone_info": false, 00:10:43.234 "zone_management": false, 00:10:43.234 
"zone_append": false, 00:10:43.234 "compare": false, 00:10:43.234 "compare_and_write": false, 00:10:43.234 "abort": false, 00:10:43.234 "seek_hole": false, 00:10:43.234 "seek_data": false, 00:10:43.234 "copy": false, 00:10:43.234 "nvme_iov_md": false 00:10:43.234 }, 00:10:43.234 "memory_domains": [ 00:10:43.234 { 00:10:43.234 "dma_device_id": "system", 00:10:43.234 "dma_device_type": 1 00:10:43.234 }, 00:10:43.234 { 00:10:43.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.234 "dma_device_type": 2 00:10:43.234 }, 00:10:43.234 { 00:10:43.234 "dma_device_id": "system", 00:10:43.234 "dma_device_type": 1 00:10:43.234 }, 00:10:43.234 { 00:10:43.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.235 "dma_device_type": 2 00:10:43.235 }, 00:10:43.235 { 00:10:43.235 "dma_device_id": "system", 00:10:43.235 "dma_device_type": 1 00:10:43.235 }, 00:10:43.235 { 00:10:43.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.235 "dma_device_type": 2 00:10:43.235 } 00:10:43.235 ], 00:10:43.235 "driver_specific": { 00:10:43.235 "raid": { 00:10:43.235 "uuid": "08900516-3348-48b7-b2cc-8de6f4865f89", 00:10:43.235 "strip_size_kb": 64, 00:10:43.235 "state": "online", 00:10:43.235 "raid_level": "raid0", 00:10:43.235 "superblock": false, 00:10:43.235 "num_base_bdevs": 3, 00:10:43.235 "num_base_bdevs_discovered": 3, 00:10:43.235 "num_base_bdevs_operational": 3, 00:10:43.235 "base_bdevs_list": [ 00:10:43.235 { 00:10:43.235 "name": "BaseBdev1", 00:10:43.235 "uuid": "58ad720c-db2f-4092-a256-34db43c7b3df", 00:10:43.235 "is_configured": true, 00:10:43.235 "data_offset": 0, 00:10:43.235 "data_size": 65536 00:10:43.235 }, 00:10:43.235 { 00:10:43.235 "name": "BaseBdev2", 00:10:43.235 "uuid": "482d817b-31fb-4b18-97b8-83bf08d1cb2a", 00:10:43.235 "is_configured": true, 00:10:43.235 "data_offset": 0, 00:10:43.235 "data_size": 65536 00:10:43.235 }, 00:10:43.235 { 00:10:43.235 "name": "BaseBdev3", 00:10:43.235 "uuid": "6db5cebd-5780-4ac7-b768-4a5158c5b8b2", 00:10:43.235 "is_configured": true, 
00:10:43.235 "data_offset": 0, 00:10:43.235 "data_size": 65536 00:10:43.235 } 00:10:43.235 ] 00:10:43.235 } 00:10:43.235 } 00:10:43.235 }' 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:43.235 BaseBdev2 00:10:43.235 BaseBdev3' 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.235 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.494 [2024-09-27 22:27:39.244385] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.494 [2024-09-27 22:27:39.244418] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.494 [2024-09-27 22:27:39.244476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
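The deletion step above shows the expected-state decision: `has_redundancy raid0` returns 1, so after removing BaseBdev1 the test sets `expected_state=offline`. A small sketch of that logic, assuming (beyond what this log shows for raid0) that SPDK's redundant levels here are raid1 and raid5f:

```shell
has_redundancy() {
    # The log only demonstrates the raid0 branch (return 1); treating
    # raid1/raid5f as redundant is an assumption about the helper.
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

expected_state_after_removal() {
    # A level with redundancy survives losing one base bdev and stays
    # online (degraded); raid0 must transition online -> offline.
    if has_redundancy "$1"; then
        echo online
    else
        echo offline
    fi
}

echo "raid0 -> $(expected_state_after_removal raid0)"
```

That is why the subsequent `verify_raid_bdev_state Existed_Raid offline raid0 64 2` expects `"state": "offline"` with two of three base bdevs remaining.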
00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.494 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.753 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.753 "name": "Existed_Raid", 00:10:43.753 "uuid": "08900516-3348-48b7-b2cc-8de6f4865f89", 00:10:43.753 "strip_size_kb": 64, 00:10:43.753 "state": "offline", 00:10:43.753 "raid_level": "raid0", 00:10:43.753 "superblock": false, 00:10:43.753 "num_base_bdevs": 3, 00:10:43.753 "num_base_bdevs_discovered": 2, 00:10:43.753 "num_base_bdevs_operational": 2, 00:10:43.753 "base_bdevs_list": [ 00:10:43.753 { 00:10:43.753 "name": null, 00:10:43.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.753 "is_configured": false, 00:10:43.753 "data_offset": 0, 00:10:43.753 "data_size": 65536 00:10:43.753 }, 00:10:43.753 { 00:10:43.753 "name": "BaseBdev2", 00:10:43.753 "uuid": "482d817b-31fb-4b18-97b8-83bf08d1cb2a", 00:10:43.753 "is_configured": true, 00:10:43.753 "data_offset": 0, 00:10:43.753 "data_size": 65536 00:10:43.753 }, 00:10:43.753 { 00:10:43.753 "name": "BaseBdev3", 00:10:43.753 "uuid": "6db5cebd-5780-4ac7-b768-4a5158c5b8b2", 00:10:43.753 "is_configured": true, 00:10:43.753 "data_offset": 0, 00:10:43.753 "data_size": 65536 00:10:43.753 } 00:10:43.753 ] 00:10:43.753 }' 00:10:43.753 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.753 22:27:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.012 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.012 [2024-09-27 22:27:39.808148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.270 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.271 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.271 22:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:44.271 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.271 22:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.271 [2024-09-27 22:27:39.958211] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.271 [2024-09-27 22:27:39.958267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.271 22:27:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.271 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 BaseBdev2 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.530 22:27:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 [ 00:10:44.530 { 00:10:44.530 "name": "BaseBdev2", 00:10:44.530 "aliases": [ 00:10:44.530 "caad8d14-21cd-487b-8567-3697517fdfcb" 00:10:44.530 ], 00:10:44.530 "product_name": "Malloc disk", 00:10:44.530 "block_size": 512, 00:10:44.530 "num_blocks": 65536, 00:10:44.530 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:44.530 "assigned_rate_limits": { 00:10:44.530 "rw_ios_per_sec": 0, 00:10:44.530 "rw_mbytes_per_sec": 0, 00:10:44.530 "r_mbytes_per_sec": 0, 00:10:44.530 "w_mbytes_per_sec": 0 00:10:44.530 }, 00:10:44.530 "claimed": false, 00:10:44.530 "zoned": false, 00:10:44.530 "supported_io_types": { 00:10:44.530 "read": true, 00:10:44.530 "write": true, 00:10:44.530 "unmap": true, 00:10:44.530 "flush": true, 00:10:44.530 "reset": true, 00:10:44.530 "nvme_admin": false, 00:10:44.530 "nvme_io": false, 00:10:44.530 "nvme_io_md": false, 00:10:44.530 "write_zeroes": true, 00:10:44.530 "zcopy": true, 00:10:44.530 "get_zone_info": false, 00:10:44.530 "zone_management": false, 00:10:44.530 "zone_append": false, 00:10:44.530 "compare": false, 00:10:44.530 "compare_and_write": false, 00:10:44.530 "abort": true, 00:10:44.530 "seek_hole": false, 00:10:44.530 "seek_data": false, 00:10:44.530 "copy": true, 00:10:44.530 "nvme_iov_md": false 00:10:44.530 }, 00:10:44.530 "memory_domains": [ 00:10:44.530 { 00:10:44.530 "dma_device_id": "system", 00:10:44.530 "dma_device_type": 1 00:10:44.530 }, 00:10:44.530 { 00:10:44.530 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.530 "dma_device_type": 2 00:10:44.530 } 00:10:44.530 ], 00:10:44.530 "driver_specific": {} 00:10:44.530 } 00:10:44.530 ] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 BaseBdev3 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.530 22:27:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 [ 00:10:44.530 { 00:10:44.530 "name": "BaseBdev3", 00:10:44.530 "aliases": [ 00:10:44.530 "696d6c9f-fd82-471c-8323-e15f054696e7" 00:10:44.530 ], 00:10:44.530 "product_name": "Malloc disk", 00:10:44.530 "block_size": 512, 00:10:44.530 "num_blocks": 65536, 00:10:44.530 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:44.530 "assigned_rate_limits": { 00:10:44.530 "rw_ios_per_sec": 0, 00:10:44.530 "rw_mbytes_per_sec": 0, 00:10:44.530 "r_mbytes_per_sec": 0, 00:10:44.530 "w_mbytes_per_sec": 0 00:10:44.530 }, 00:10:44.530 "claimed": false, 00:10:44.530 "zoned": false, 00:10:44.530 "supported_io_types": { 00:10:44.530 "read": true, 00:10:44.530 "write": true, 00:10:44.530 "unmap": true, 00:10:44.530 "flush": true, 00:10:44.530 "reset": true, 00:10:44.530 "nvme_admin": false, 00:10:44.530 "nvme_io": false, 00:10:44.530 "nvme_io_md": false, 00:10:44.530 "write_zeroes": true, 00:10:44.530 "zcopy": true, 00:10:44.530 "get_zone_info": false, 00:10:44.530 "zone_management": false, 00:10:44.530 "zone_append": false, 00:10:44.530 "compare": false, 00:10:44.530 "compare_and_write": false, 00:10:44.530 "abort": true, 00:10:44.530 "seek_hole": false, 00:10:44.530 "seek_data": false, 00:10:44.530 "copy": true, 00:10:44.530 "nvme_iov_md": false 00:10:44.530 }, 00:10:44.530 "memory_domains": [ 00:10:44.530 { 00:10:44.530 "dma_device_id": "system", 00:10:44.530 "dma_device_type": 1 00:10:44.530 }, 00:10:44.530 { 00:10:44.530 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.530 "dma_device_type": 2 00:10:44.530 } 00:10:44.530 ], 00:10:44.530 "driver_specific": {} 00:10:44.530 } 00:10:44.530 ] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.530 [2024-09-27 22:27:40.305918] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.530 [2024-09-27 22:27:40.305991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.530 [2024-09-27 22:27:40.306022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.530 [2024-09-27 22:27:40.308272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.530 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.531 
22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.531 "name": "Existed_Raid", 00:10:44.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.531 "strip_size_kb": 64, 00:10:44.531 "state": "configuring", 00:10:44.531 "raid_level": "raid0", 00:10:44.531 "superblock": false, 00:10:44.531 "num_base_bdevs": 3, 00:10:44.531 "num_base_bdevs_discovered": 2, 00:10:44.531 "num_base_bdevs_operational": 3, 00:10:44.531 "base_bdevs_list": [ 00:10:44.531 { 00:10:44.531 "name": "BaseBdev1", 00:10:44.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.531 "is_configured": false, 00:10:44.531 
"data_offset": 0, 00:10:44.531 "data_size": 0 00:10:44.531 }, 00:10:44.531 { 00:10:44.531 "name": "BaseBdev2", 00:10:44.531 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:44.531 "is_configured": true, 00:10:44.531 "data_offset": 0, 00:10:44.531 "data_size": 65536 00:10:44.531 }, 00:10:44.531 { 00:10:44.531 "name": "BaseBdev3", 00:10:44.531 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:44.531 "is_configured": true, 00:10:44.531 "data_offset": 0, 00:10:44.531 "data_size": 65536 00:10:44.531 } 00:10:44.531 ] 00:10:44.531 }' 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.531 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.098 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:45.098 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.099 [2024-09-27 22:27:40.713304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.099 "name": "Existed_Raid", 00:10:45.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.099 "strip_size_kb": 64, 00:10:45.099 "state": "configuring", 00:10:45.099 "raid_level": "raid0", 00:10:45.099 "superblock": false, 00:10:45.099 "num_base_bdevs": 3, 00:10:45.099 "num_base_bdevs_discovered": 1, 00:10:45.099 "num_base_bdevs_operational": 3, 00:10:45.099 "base_bdevs_list": [ 00:10:45.099 { 00:10:45.099 "name": "BaseBdev1", 00:10:45.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.099 "is_configured": false, 00:10:45.099 "data_offset": 0, 00:10:45.099 "data_size": 0 00:10:45.099 }, 00:10:45.099 { 00:10:45.099 "name": null, 00:10:45.099 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:45.099 "is_configured": false, 00:10:45.099 "data_offset": 0, 00:10:45.099 "data_size": 65536 00:10:45.099 }, 00:10:45.099 { 
00:10:45.099 "name": "BaseBdev3", 00:10:45.099 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:45.099 "is_configured": true, 00:10:45.099 "data_offset": 0, 00:10:45.099 "data_size": 65536 00:10:45.099 } 00:10:45.099 ] 00:10:45.099 }' 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.099 22:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.358 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.617 [2024-09-27 22:27:41.247197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.617 BaseBdev1 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:45.617 22:27:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.617 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.618 [ 00:10:45.618 { 00:10:45.618 "name": "BaseBdev1", 00:10:45.618 "aliases": [ 00:10:45.618 "7771da14-abca-4bd6-836a-0a9304f94617" 00:10:45.618 ], 00:10:45.618 "product_name": "Malloc disk", 00:10:45.618 "block_size": 512, 00:10:45.618 "num_blocks": 65536, 00:10:45.618 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:45.618 "assigned_rate_limits": { 00:10:45.618 "rw_ios_per_sec": 0, 00:10:45.618 "rw_mbytes_per_sec": 0, 00:10:45.618 "r_mbytes_per_sec": 0, 00:10:45.618 "w_mbytes_per_sec": 0 00:10:45.618 }, 00:10:45.618 "claimed": true, 00:10:45.618 "claim_type": "exclusive_write", 00:10:45.618 "zoned": false, 00:10:45.618 "supported_io_types": { 00:10:45.618 "read": true, 00:10:45.618 "write": true, 00:10:45.618 "unmap": true, 00:10:45.618 "flush": true, 
00:10:45.618 "reset": true, 00:10:45.618 "nvme_admin": false, 00:10:45.618 "nvme_io": false, 00:10:45.618 "nvme_io_md": false, 00:10:45.618 "write_zeroes": true, 00:10:45.618 "zcopy": true, 00:10:45.618 "get_zone_info": false, 00:10:45.618 "zone_management": false, 00:10:45.618 "zone_append": false, 00:10:45.618 "compare": false, 00:10:45.618 "compare_and_write": false, 00:10:45.618 "abort": true, 00:10:45.618 "seek_hole": false, 00:10:45.618 "seek_data": false, 00:10:45.618 "copy": true, 00:10:45.618 "nvme_iov_md": false 00:10:45.618 }, 00:10:45.618 "memory_domains": [ 00:10:45.618 { 00:10:45.618 "dma_device_id": "system", 00:10:45.618 "dma_device_type": 1 00:10:45.618 }, 00:10:45.618 { 00:10:45.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.618 "dma_device_type": 2 00:10:45.618 } 00:10:45.618 ], 00:10:45.618 "driver_specific": {} 00:10:45.618 } 00:10:45.618 ] 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.618 "name": "Existed_Raid", 00:10:45.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.618 "strip_size_kb": 64, 00:10:45.618 "state": "configuring", 00:10:45.618 "raid_level": "raid0", 00:10:45.618 "superblock": false, 00:10:45.618 "num_base_bdevs": 3, 00:10:45.618 "num_base_bdevs_discovered": 2, 00:10:45.618 "num_base_bdevs_operational": 3, 00:10:45.618 "base_bdevs_list": [ 00:10:45.618 { 00:10:45.618 "name": "BaseBdev1", 00:10:45.618 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:45.618 "is_configured": true, 00:10:45.618 "data_offset": 0, 00:10:45.618 "data_size": 65536 00:10:45.618 }, 00:10:45.618 { 00:10:45.618 "name": null, 00:10:45.618 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:45.618 "is_configured": false, 00:10:45.618 "data_offset": 0, 00:10:45.618 "data_size": 65536 00:10:45.618 }, 00:10:45.618 { 00:10:45.618 "name": "BaseBdev3", 00:10:45.618 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:45.618 "is_configured": true, 00:10:45.618 "data_offset": 0, 00:10:45.618 "data_size": 65536 
00:10:45.618 } 00:10:45.618 ] 00:10:45.618 }' 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.618 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.877 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.877 [2024-09-27 22:27:41.747150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.137 
22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.137 "name": "Existed_Raid", 00:10:46.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.137 "strip_size_kb": 64, 00:10:46.137 "state": "configuring", 00:10:46.137 "raid_level": "raid0", 00:10:46.137 "superblock": false, 00:10:46.137 "num_base_bdevs": 3, 00:10:46.137 "num_base_bdevs_discovered": 1, 00:10:46.137 "num_base_bdevs_operational": 3, 00:10:46.137 "base_bdevs_list": [ 00:10:46.137 { 00:10:46.137 "name": "BaseBdev1", 00:10:46.137 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:46.137 "is_configured": true, 00:10:46.137 "data_offset": 0, 00:10:46.137 "data_size": 65536 00:10:46.137 }, 00:10:46.137 { 00:10:46.137 "name": null, 
00:10:46.137 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:46.137 "is_configured": false, 00:10:46.137 "data_offset": 0, 00:10:46.137 "data_size": 65536 00:10:46.137 }, 00:10:46.137 { 00:10:46.137 "name": null, 00:10:46.137 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:46.137 "is_configured": false, 00:10:46.137 "data_offset": 0, 00:10:46.137 "data_size": 65536 00:10:46.137 } 00:10:46.137 ] 00:10:46.137 }' 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.137 22:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 [2024-09-27 22:27:42.222661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.396 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.655 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.655 "name": "Existed_Raid", 00:10:46.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.655 "strip_size_kb": 64, 00:10:46.655 "state": "configuring", 00:10:46.655 "raid_level": "raid0", 00:10:46.655 "superblock": false, 00:10:46.655 
"num_base_bdevs": 3, 00:10:46.655 "num_base_bdevs_discovered": 2, 00:10:46.655 "num_base_bdevs_operational": 3, 00:10:46.655 "base_bdevs_list": [ 00:10:46.655 { 00:10:46.655 "name": "BaseBdev1", 00:10:46.655 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:46.655 "is_configured": true, 00:10:46.655 "data_offset": 0, 00:10:46.655 "data_size": 65536 00:10:46.655 }, 00:10:46.655 { 00:10:46.655 "name": null, 00:10:46.655 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:46.655 "is_configured": false, 00:10:46.655 "data_offset": 0, 00:10:46.655 "data_size": 65536 00:10:46.655 }, 00:10:46.655 { 00:10:46.655 "name": "BaseBdev3", 00:10:46.655 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:46.655 "is_configured": true, 00:10:46.655 "data_offset": 0, 00:10:46.655 "data_size": 65536 00:10:46.655 } 00:10:46.655 ] 00:10:46.655 }' 00:10:46.655 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.655 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.912 22:27:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.912 [2024-09-27 22:27:42.670191] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.912 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.170 22:27:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.170 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.170 "name": "Existed_Raid", 00:10:47.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.170 "strip_size_kb": 64, 00:10:47.170 "state": "configuring", 00:10:47.170 "raid_level": "raid0", 00:10:47.170 "superblock": false, 00:10:47.170 "num_base_bdevs": 3, 00:10:47.170 "num_base_bdevs_discovered": 1, 00:10:47.170 "num_base_bdevs_operational": 3, 00:10:47.170 "base_bdevs_list": [ 00:10:47.170 { 00:10:47.170 "name": null, 00:10:47.170 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:47.170 "is_configured": false, 00:10:47.170 "data_offset": 0, 00:10:47.170 "data_size": 65536 00:10:47.170 }, 00:10:47.170 { 00:10:47.170 "name": null, 00:10:47.170 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:47.170 "is_configured": false, 00:10:47.170 "data_offset": 0, 00:10:47.170 "data_size": 65536 00:10:47.170 }, 00:10:47.170 { 00:10:47.170 "name": "BaseBdev3", 00:10:47.170 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:47.170 "is_configured": true, 00:10:47.170 "data_offset": 0, 00:10:47.170 "data_size": 65536 00:10:47.170 } 00:10:47.170 ] 00:10:47.170 }' 00:10:47.170 22:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.170 22:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.429 [2024-09-27 22:27:43.215996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.429 "name": "Existed_Raid", 00:10:47.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.429 "strip_size_kb": 64, 00:10:47.429 "state": "configuring", 00:10:47.429 "raid_level": "raid0", 00:10:47.429 "superblock": false, 00:10:47.429 "num_base_bdevs": 3, 00:10:47.429 "num_base_bdevs_discovered": 2, 00:10:47.429 "num_base_bdevs_operational": 3, 00:10:47.429 "base_bdevs_list": [ 00:10:47.429 { 00:10:47.429 "name": null, 00:10:47.429 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:47.429 "is_configured": false, 00:10:47.429 "data_offset": 0, 00:10:47.429 "data_size": 65536 00:10:47.429 }, 00:10:47.429 { 00:10:47.429 "name": "BaseBdev2", 00:10:47.429 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:47.429 "is_configured": true, 00:10:47.429 "data_offset": 0, 00:10:47.429 "data_size": 65536 00:10:47.429 }, 00:10:47.429 { 00:10:47.429 "name": "BaseBdev3", 00:10:47.429 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:47.429 "is_configured": true, 00:10:47.429 "data_offset": 0, 00:10:47.429 "data_size": 65536 00:10:47.429 } 00:10:47.429 ] 00:10:47.429 }' 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.429 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.996 22:27:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7771da14-abca-4bd6-836a-0a9304f94617 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.996 [2024-09-27 22:27:43.736189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:47.996 [2024-09-27 22:27:43.736245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:47.996 [2024-09-27 22:27:43.736258] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:47.996 [2024-09-27 22:27:43.736552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:47.996 [2024-09-27 22:27:43.736696] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:47.996 [2024-09-27 22:27:43.736705] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:47.996 [2024-09-27 22:27:43.736994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.996 NewBaseBdev 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.996 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:47.996 [ 00:10:47.996 { 00:10:47.996 "name": "NewBaseBdev", 00:10:47.996 "aliases": [ 00:10:47.996 "7771da14-abca-4bd6-836a-0a9304f94617" 00:10:47.996 ], 00:10:47.996 "product_name": "Malloc disk", 00:10:47.996 "block_size": 512, 00:10:47.996 "num_blocks": 65536, 00:10:47.996 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:47.997 "assigned_rate_limits": { 00:10:47.997 "rw_ios_per_sec": 0, 00:10:47.997 "rw_mbytes_per_sec": 0, 00:10:47.997 "r_mbytes_per_sec": 0, 00:10:47.997 "w_mbytes_per_sec": 0 00:10:47.997 }, 00:10:47.997 "claimed": true, 00:10:47.997 "claim_type": "exclusive_write", 00:10:47.997 "zoned": false, 00:10:47.997 "supported_io_types": { 00:10:47.997 "read": true, 00:10:47.997 "write": true, 00:10:47.997 "unmap": true, 00:10:47.997 "flush": true, 00:10:47.997 "reset": true, 00:10:47.997 "nvme_admin": false, 00:10:47.997 "nvme_io": false, 00:10:47.997 "nvme_io_md": false, 00:10:47.997 "write_zeroes": true, 00:10:47.997 "zcopy": true, 00:10:47.997 "get_zone_info": false, 00:10:47.997 "zone_management": false, 00:10:47.997 "zone_append": false, 00:10:47.997 "compare": false, 00:10:47.997 "compare_and_write": false, 00:10:47.997 "abort": true, 00:10:47.997 "seek_hole": false, 00:10:47.997 "seek_data": false, 00:10:47.997 "copy": true, 00:10:47.997 "nvme_iov_md": false 00:10:47.997 }, 00:10:47.997 "memory_domains": [ 00:10:47.997 { 00:10:47.997 "dma_device_id": "system", 00:10:47.997 "dma_device_type": 1 00:10:47.997 }, 00:10:47.997 { 00:10:47.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.997 "dma_device_type": 2 00:10:47.997 } 00:10:47.997 ], 00:10:47.997 "driver_specific": {} 00:10:47.997 } 00:10:47.997 ] 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.997 "name": "Existed_Raid", 00:10:47.997 "uuid": "823a00b4-4d15-4ef2-913d-8a764d141c75", 00:10:47.997 "strip_size_kb": 64, 00:10:47.997 "state": "online", 00:10:47.997 "raid_level": "raid0", 00:10:47.997 "superblock": false, 00:10:47.997 "num_base_bdevs": 3, 00:10:47.997 
"num_base_bdevs_discovered": 3, 00:10:47.997 "num_base_bdevs_operational": 3, 00:10:47.997 "base_bdevs_list": [ 00:10:47.997 { 00:10:47.997 "name": "NewBaseBdev", 00:10:47.997 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:47.997 "is_configured": true, 00:10:47.997 "data_offset": 0, 00:10:47.997 "data_size": 65536 00:10:47.997 }, 00:10:47.997 { 00:10:47.997 "name": "BaseBdev2", 00:10:47.997 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:47.997 "is_configured": true, 00:10:47.997 "data_offset": 0, 00:10:47.997 "data_size": 65536 00:10:47.997 }, 00:10:47.997 { 00:10:47.997 "name": "BaseBdev3", 00:10:47.997 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:47.997 "is_configured": true, 00:10:47.997 "data_offset": 0, 00:10:47.997 "data_size": 65536 00:10:47.997 } 00:10:47.997 ] 00:10:47.997 }' 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.997 22:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.566 [2024-09-27 22:27:44.235836] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.566 "name": "Existed_Raid", 00:10:48.566 "aliases": [ 00:10:48.566 "823a00b4-4d15-4ef2-913d-8a764d141c75" 00:10:48.566 ], 00:10:48.566 "product_name": "Raid Volume", 00:10:48.566 "block_size": 512, 00:10:48.566 "num_blocks": 196608, 00:10:48.566 "uuid": "823a00b4-4d15-4ef2-913d-8a764d141c75", 00:10:48.566 "assigned_rate_limits": { 00:10:48.566 "rw_ios_per_sec": 0, 00:10:48.566 "rw_mbytes_per_sec": 0, 00:10:48.566 "r_mbytes_per_sec": 0, 00:10:48.566 "w_mbytes_per_sec": 0 00:10:48.566 }, 00:10:48.566 "claimed": false, 00:10:48.566 "zoned": false, 00:10:48.566 "supported_io_types": { 00:10:48.566 "read": true, 00:10:48.566 "write": true, 00:10:48.566 "unmap": true, 00:10:48.566 "flush": true, 00:10:48.566 "reset": true, 00:10:48.566 "nvme_admin": false, 00:10:48.566 "nvme_io": false, 00:10:48.566 "nvme_io_md": false, 00:10:48.566 "write_zeroes": true, 00:10:48.566 "zcopy": false, 00:10:48.566 "get_zone_info": false, 00:10:48.566 "zone_management": false, 00:10:48.566 "zone_append": false, 00:10:48.566 "compare": false, 00:10:48.566 "compare_and_write": false, 00:10:48.566 "abort": false, 00:10:48.566 "seek_hole": false, 00:10:48.566 "seek_data": false, 00:10:48.566 "copy": false, 00:10:48.566 "nvme_iov_md": false 00:10:48.566 }, 00:10:48.566 "memory_domains": [ 00:10:48.566 { 00:10:48.566 "dma_device_id": "system", 00:10:48.566 "dma_device_type": 1 00:10:48.566 }, 00:10:48.566 { 00:10:48.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.566 "dma_device_type": 2 00:10:48.566 }, 
00:10:48.566 { 00:10:48.566 "dma_device_id": "system", 00:10:48.566 "dma_device_type": 1 00:10:48.566 }, 00:10:48.566 { 00:10:48.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.566 "dma_device_type": 2 00:10:48.566 }, 00:10:48.566 { 00:10:48.566 "dma_device_id": "system", 00:10:48.566 "dma_device_type": 1 00:10:48.566 }, 00:10:48.566 { 00:10:48.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.566 "dma_device_type": 2 00:10:48.566 } 00:10:48.566 ], 00:10:48.566 "driver_specific": { 00:10:48.566 "raid": { 00:10:48.566 "uuid": "823a00b4-4d15-4ef2-913d-8a764d141c75", 00:10:48.566 "strip_size_kb": 64, 00:10:48.566 "state": "online", 00:10:48.566 "raid_level": "raid0", 00:10:48.566 "superblock": false, 00:10:48.566 "num_base_bdevs": 3, 00:10:48.566 "num_base_bdevs_discovered": 3, 00:10:48.566 "num_base_bdevs_operational": 3, 00:10:48.566 "base_bdevs_list": [ 00:10:48.566 { 00:10:48.566 "name": "NewBaseBdev", 00:10:48.566 "uuid": "7771da14-abca-4bd6-836a-0a9304f94617", 00:10:48.566 "is_configured": true, 00:10:48.566 "data_offset": 0, 00:10:48.566 "data_size": 65536 00:10:48.566 }, 00:10:48.566 { 00:10:48.566 "name": "BaseBdev2", 00:10:48.566 "uuid": "caad8d14-21cd-487b-8567-3697517fdfcb", 00:10:48.566 "is_configured": true, 00:10:48.566 "data_offset": 0, 00:10:48.566 "data_size": 65536 00:10:48.566 }, 00:10:48.566 { 00:10:48.566 "name": "BaseBdev3", 00:10:48.566 "uuid": "696d6c9f-fd82-471c-8323-e15f054696e7", 00:10:48.566 "is_configured": true, 00:10:48.566 "data_offset": 0, 00:10:48.566 "data_size": 65536 00:10:48.566 } 00:10:48.566 ] 00:10:48.566 } 00:10:48.566 } 00:10:48.566 }' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.566 BaseBdev2 00:10:48.566 BaseBdev3' 00:10:48.566 22:27:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.566 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.826 [2024-09-27 22:27:44.511182] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.826 [2024-09-27 22:27:44.511219] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.826 [2024-09-27 22:27:44.511313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.826 [2024-09-27 22:27:44.511371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.826 [2024-09-27 22:27:44.511386] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64286 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 64286 ']' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 64286 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64286 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.826 killing process with pid 64286 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64286' 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 64286 00:10:48.826 [2024-09-27 22:27:44.566143] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.826 22:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 64286 00:10:49.086 [2024-09-27 22:27:44.885943] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.623 22:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:51.623 00:10:51.623 real 0m11.745s 00:10:51.623 user 0m17.755s 00:10:51.623 sys 0m2.164s 00:10:51.623 22:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:10:51.623 ************************************ 00:10:51.623 END TEST raid_state_function_test 00:10:51.623 ************************************ 00:10:51.623 22:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.623 22:27:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:51.623 22:27:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:51.623 22:27:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.623 22:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.623 ************************************ 00:10:51.623 START TEST raid_state_function_test_sb 00:10:51.623 ************************************ 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64924 00:10:51.623 22:27:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64924' 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.623 Process raid pid: 64924 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64924 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64924 ']' 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.623 22:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.623 [2024-09-27 22:27:47.156146] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:10:51.623 [2024-09-27 22:27:47.156560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.623 [2024-09-27 22:27:47.323604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.883 [2024-09-27 22:27:47.567320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.141 [2024-09-27 22:27:47.815198] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.141 [2024-09-27 22:27:47.815238] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 [2024-09-27 22:27:48.319012] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.708 [2024-09-27 22:27:48.319258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.708 [2024-09-27 22:27:48.319384] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.708 [2024-09-27 22:27:48.319439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.708 [2024-09-27 22:27:48.319521] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:52.708 [2024-09-27 22:27:48.319544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.708 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.709 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.709 22:27:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.709 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.709 "name": "Existed_Raid", 00:10:52.709 "uuid": "3207e21f-9608-4b28-ba59-938e6689465a", 00:10:52.709 "strip_size_kb": 64, 00:10:52.709 "state": "configuring", 00:10:52.709 "raid_level": "raid0", 00:10:52.709 "superblock": true, 00:10:52.709 "num_base_bdevs": 3, 00:10:52.709 "num_base_bdevs_discovered": 0, 00:10:52.709 "num_base_bdevs_operational": 3, 00:10:52.709 "base_bdevs_list": [ 00:10:52.709 { 00:10:52.709 "name": "BaseBdev1", 00:10:52.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.709 "is_configured": false, 00:10:52.709 "data_offset": 0, 00:10:52.709 "data_size": 0 00:10:52.709 }, 00:10:52.709 { 00:10:52.709 "name": "BaseBdev2", 00:10:52.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.709 "is_configured": false, 00:10:52.709 "data_offset": 0, 00:10:52.709 "data_size": 0 00:10:52.709 }, 00:10:52.709 { 00:10:52.709 "name": "BaseBdev3", 00:10:52.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.709 "is_configured": false, 00:10:52.709 "data_offset": 0, 00:10:52.709 "data_size": 0 00:10:52.709 } 00:10:52.709 ] 00:10:52.709 }' 00:10:52.709 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.709 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 [2024-09-27 22:27:48.750256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.968 [2024-09-27 22:27:48.750297] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.968 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 [2024-09-27 22:27:48.762273] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.968 [2024-09-27 22:27:48.762329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.968 [2024-09-27 22:27:48.762340] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.968 [2024-09-27 22:27:48.762353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.969 [2024-09-27 22:27:48.762361] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.969 [2024-09-27 22:27:48.762373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.969 [2024-09-27 22:27:48.817512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.969 BaseBdev1 
00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.969 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.229 [ 00:10:53.229 { 00:10:53.229 "name": "BaseBdev1", 00:10:53.229 "aliases": [ 00:10:53.229 "54d0aadd-defc-4bbf-9b87-9b433e36bf85" 00:10:53.229 ], 00:10:53.229 "product_name": "Malloc disk", 00:10:53.229 "block_size": 512, 00:10:53.229 "num_blocks": 65536, 00:10:53.229 "uuid": "54d0aadd-defc-4bbf-9b87-9b433e36bf85", 00:10:53.229 "assigned_rate_limits": { 00:10:53.229 
"rw_ios_per_sec": 0, 00:10:53.229 "rw_mbytes_per_sec": 0, 00:10:53.229 "r_mbytes_per_sec": 0, 00:10:53.229 "w_mbytes_per_sec": 0 00:10:53.229 }, 00:10:53.229 "claimed": true, 00:10:53.229 "claim_type": "exclusive_write", 00:10:53.229 "zoned": false, 00:10:53.229 "supported_io_types": { 00:10:53.229 "read": true, 00:10:53.229 "write": true, 00:10:53.229 "unmap": true, 00:10:53.229 "flush": true, 00:10:53.229 "reset": true, 00:10:53.229 "nvme_admin": false, 00:10:53.229 "nvme_io": false, 00:10:53.229 "nvme_io_md": false, 00:10:53.229 "write_zeroes": true, 00:10:53.229 "zcopy": true, 00:10:53.229 "get_zone_info": false, 00:10:53.229 "zone_management": false, 00:10:53.229 "zone_append": false, 00:10:53.229 "compare": false, 00:10:53.229 "compare_and_write": false, 00:10:53.229 "abort": true, 00:10:53.229 "seek_hole": false, 00:10:53.229 "seek_data": false, 00:10:53.229 "copy": true, 00:10:53.229 "nvme_iov_md": false 00:10:53.229 }, 00:10:53.229 "memory_domains": [ 00:10:53.229 { 00:10:53.229 "dma_device_id": "system", 00:10:53.229 "dma_device_type": 1 00:10:53.229 }, 00:10:53.229 { 00:10:53.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.229 "dma_device_type": 2 00:10:53.229 } 00:10:53.229 ], 00:10:53.229 "driver_specific": {} 00:10:53.229 } 00:10:53.229 ] 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.229 "name": "Existed_Raid", 00:10:53.229 "uuid": "4044e167-aa23-4898-98f9-8fed1921b95b", 00:10:53.229 "strip_size_kb": 64, 00:10:53.229 "state": "configuring", 00:10:53.229 "raid_level": "raid0", 00:10:53.229 "superblock": true, 00:10:53.229 "num_base_bdevs": 3, 00:10:53.229 "num_base_bdevs_discovered": 1, 00:10:53.229 "num_base_bdevs_operational": 3, 00:10:53.229 "base_bdevs_list": [ 00:10:53.229 { 00:10:53.229 "name": "BaseBdev1", 00:10:53.229 "uuid": "54d0aadd-defc-4bbf-9b87-9b433e36bf85", 00:10:53.229 "is_configured": true, 00:10:53.229 "data_offset": 2048, 00:10:53.229 "data_size": 63488 
00:10:53.229 }, 00:10:53.229 { 00:10:53.229 "name": "BaseBdev2", 00:10:53.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.229 "is_configured": false, 00:10:53.229 "data_offset": 0, 00:10:53.229 "data_size": 0 00:10:53.229 }, 00:10:53.229 { 00:10:53.229 "name": "BaseBdev3", 00:10:53.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.229 "is_configured": false, 00:10:53.229 "data_offset": 0, 00:10:53.229 "data_size": 0 00:10:53.229 } 00:10:53.229 ] 00:10:53.229 }' 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.229 22:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.488 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.488 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.488 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.488 [2024-09-27 22:27:49.344842] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.488 [2024-09-27 22:27:49.344905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:53.488 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.489 [2024-09-27 22:27:49.356913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.489 [2024-09-27 
22:27:49.359318] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.489 [2024-09-27 22:27:49.359489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.489 [2024-09-27 22:27:49.359580] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.489 [2024-09-27 22:27:49.359662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.489 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.748 "name": "Existed_Raid", 00:10:53.748 "uuid": "38fd966e-b2d0-4047-b08a-04b881619628", 00:10:53.748 "strip_size_kb": 64, 00:10:53.748 "state": "configuring", 00:10:53.748 "raid_level": "raid0", 00:10:53.748 "superblock": true, 00:10:53.748 "num_base_bdevs": 3, 00:10:53.748 "num_base_bdevs_discovered": 1, 00:10:53.748 "num_base_bdevs_operational": 3, 00:10:53.748 "base_bdevs_list": [ 00:10:53.748 { 00:10:53.748 "name": "BaseBdev1", 00:10:53.748 "uuid": "54d0aadd-defc-4bbf-9b87-9b433e36bf85", 00:10:53.748 "is_configured": true, 00:10:53.748 "data_offset": 2048, 00:10:53.748 "data_size": 63488 00:10:53.748 }, 00:10:53.748 { 00:10:53.748 "name": "BaseBdev2", 00:10:53.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.748 "is_configured": false, 00:10:53.748 "data_offset": 0, 00:10:53.748 "data_size": 0 00:10:53.748 }, 00:10:53.748 { 00:10:53.748 "name": "BaseBdev3", 00:10:53.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.748 "is_configured": false, 00:10:53.748 "data_offset": 0, 00:10:53.748 "data_size": 0 00:10:53.748 } 00:10:53.748 ] 00:10:53.748 }' 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.748 22:27:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.007 BaseBdev2 00:10:54.007 [2024-09-27 22:27:49.854384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.007 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.007 [ 00:10:54.007 { 00:10:54.007 "name": "BaseBdev2", 00:10:54.007 "aliases": [ 00:10:54.008 "add45960-fa30-45c5-a349-1e5891b72e4d" 00:10:54.008 ], 00:10:54.008 "product_name": "Malloc disk", 00:10:54.008 "block_size": 512, 00:10:54.008 "num_blocks": 65536, 00:10:54.008 "uuid": "add45960-fa30-45c5-a349-1e5891b72e4d", 00:10:54.008 "assigned_rate_limits": { 00:10:54.267 "rw_ios_per_sec": 0, 00:10:54.267 "rw_mbytes_per_sec": 0, 00:10:54.267 "r_mbytes_per_sec": 0, 00:10:54.267 "w_mbytes_per_sec": 0 00:10:54.267 }, 00:10:54.267 "claimed": true, 00:10:54.267 "claim_type": "exclusive_write", 00:10:54.267 "zoned": false, 00:10:54.267 "supported_io_types": { 00:10:54.267 "read": true, 00:10:54.267 "write": true, 00:10:54.267 "unmap": true, 00:10:54.267 "flush": true, 00:10:54.267 "reset": true, 00:10:54.267 "nvme_admin": false, 00:10:54.267 "nvme_io": false, 00:10:54.267 "nvme_io_md": false, 00:10:54.267 "write_zeroes": true, 00:10:54.267 "zcopy": true, 00:10:54.267 "get_zone_info": false, 00:10:54.267 "zone_management": false, 00:10:54.267 "zone_append": false, 00:10:54.267 "compare": false, 00:10:54.267 "compare_and_write": false, 00:10:54.267 "abort": true, 00:10:54.267 "seek_hole": false, 00:10:54.267 "seek_data": false, 00:10:54.267 "copy": true, 00:10:54.267 "nvme_iov_md": false 00:10:54.267 }, 00:10:54.267 "memory_domains": [ 00:10:54.267 { 00:10:54.267 "dma_device_id": "system", 00:10:54.267 "dma_device_type": 1 00:10:54.267 }, 00:10:54.267 { 00:10:54.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.267 "dma_device_type": 2 00:10:54.267 } 00:10:54.267 ], 00:10:54.267 "driver_specific": {} 00:10:54.267 } 00:10:54.267 ] 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.267 "name": "Existed_Raid", 00:10:54.267 "uuid": "38fd966e-b2d0-4047-b08a-04b881619628", 00:10:54.267 "strip_size_kb": 64, 00:10:54.267 "state": "configuring", 00:10:54.267 "raid_level": "raid0", 00:10:54.267 "superblock": true, 00:10:54.267 "num_base_bdevs": 3, 00:10:54.267 "num_base_bdevs_discovered": 2, 00:10:54.267 "num_base_bdevs_operational": 3, 00:10:54.267 "base_bdevs_list": [ 00:10:54.267 { 00:10:54.267 "name": "BaseBdev1", 00:10:54.267 "uuid": "54d0aadd-defc-4bbf-9b87-9b433e36bf85", 00:10:54.267 "is_configured": true, 00:10:54.267 "data_offset": 2048, 00:10:54.267 "data_size": 63488 00:10:54.267 }, 00:10:54.267 { 00:10:54.267 "name": "BaseBdev2", 00:10:54.267 "uuid": "add45960-fa30-45c5-a349-1e5891b72e4d", 00:10:54.267 "is_configured": true, 00:10:54.267 "data_offset": 2048, 00:10:54.267 "data_size": 63488 00:10:54.267 }, 00:10:54.267 { 00:10:54.267 "name": "BaseBdev3", 00:10:54.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.267 "is_configured": false, 00:10:54.267 "data_offset": 0, 00:10:54.267 "data_size": 0 00:10:54.267 } 00:10:54.267 ] 00:10:54.267 }' 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.267 22:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.527 [2024-09-27 22:27:50.350379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.527 [2024-09-27 22:27:50.350880] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:54.527 [2024-09-27 22:27:50.350913] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:54.527 [2024-09-27 22:27:50.351262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:54.527 [2024-09-27 22:27:50.351432] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:54.527 [2024-09-27 22:27:50.351444] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:54.527 BaseBdev3 00:10:54.527 [2024-09-27 22:27:50.351610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.527 [ 00:10:54.527 { 00:10:54.527 "name": "BaseBdev3", 00:10:54.527 "aliases": [ 00:10:54.527 "26d563f5-31c1-44fc-b974-04739bc6d8e9" 00:10:54.527 ], 00:10:54.527 "product_name": "Malloc disk", 00:10:54.527 "block_size": 512, 00:10:54.527 "num_blocks": 65536, 00:10:54.527 "uuid": "26d563f5-31c1-44fc-b974-04739bc6d8e9", 00:10:54.527 "assigned_rate_limits": { 00:10:54.527 "rw_ios_per_sec": 0, 00:10:54.527 "rw_mbytes_per_sec": 0, 00:10:54.527 "r_mbytes_per_sec": 0, 00:10:54.527 "w_mbytes_per_sec": 0 00:10:54.527 }, 00:10:54.527 "claimed": true, 00:10:54.527 "claim_type": "exclusive_write", 00:10:54.527 "zoned": false, 00:10:54.527 "supported_io_types": { 00:10:54.527 "read": true, 00:10:54.527 "write": true, 00:10:54.527 "unmap": true, 00:10:54.527 "flush": true, 00:10:54.527 "reset": true, 00:10:54.527 "nvme_admin": false, 00:10:54.527 "nvme_io": false, 00:10:54.527 "nvme_io_md": false, 00:10:54.527 "write_zeroes": true, 00:10:54.527 "zcopy": true, 00:10:54.527 "get_zone_info": false, 00:10:54.527 "zone_management": false, 00:10:54.527 "zone_append": false, 00:10:54.527 "compare": false, 00:10:54.527 "compare_and_write": false, 00:10:54.527 "abort": true, 00:10:54.527 "seek_hole": false, 00:10:54.527 "seek_data": false, 00:10:54.527 "copy": true, 00:10:54.527 "nvme_iov_md": false 00:10:54.527 }, 00:10:54.527 "memory_domains": [ 00:10:54.527 { 00:10:54.527 "dma_device_id": "system", 00:10:54.527 "dma_device_type": 1 00:10:54.527 }, 00:10:54.527 { 00:10:54.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.527 "dma_device_type": 2 00:10:54.527 } 00:10:54.527 ], 00:10:54.527 "driver_specific": 
{} 00:10:54.527 } 00:10:54.527 ] 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.527 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.788 "name": "Existed_Raid", 00:10:54.788 "uuid": "38fd966e-b2d0-4047-b08a-04b881619628", 00:10:54.788 "strip_size_kb": 64, 00:10:54.788 "state": "online", 00:10:54.788 "raid_level": "raid0", 00:10:54.788 "superblock": true, 00:10:54.788 "num_base_bdevs": 3, 00:10:54.788 "num_base_bdevs_discovered": 3, 00:10:54.788 "num_base_bdevs_operational": 3, 00:10:54.788 "base_bdevs_list": [ 00:10:54.788 { 00:10:54.788 "name": "BaseBdev1", 00:10:54.788 "uuid": "54d0aadd-defc-4bbf-9b87-9b433e36bf85", 00:10:54.788 "is_configured": true, 00:10:54.788 "data_offset": 2048, 00:10:54.788 "data_size": 63488 00:10:54.788 }, 00:10:54.788 { 00:10:54.788 "name": "BaseBdev2", 00:10:54.788 "uuid": "add45960-fa30-45c5-a349-1e5891b72e4d", 00:10:54.788 "is_configured": true, 00:10:54.788 "data_offset": 2048, 00:10:54.788 "data_size": 63488 00:10:54.788 }, 00:10:54.788 { 00:10:54.788 "name": "BaseBdev3", 00:10:54.788 "uuid": "26d563f5-31c1-44fc-b974-04739bc6d8e9", 00:10:54.788 "is_configured": true, 00:10:54.788 "data_offset": 2048, 00:10:54.788 "data_size": 63488 00:10:54.788 } 00:10:54.788 ] 00:10:54.788 }' 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.788 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.048 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.048 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.048 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.049 [2024-09-27 22:27:50.830232] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.049 "name": "Existed_Raid", 00:10:55.049 "aliases": [ 00:10:55.049 "38fd966e-b2d0-4047-b08a-04b881619628" 00:10:55.049 ], 00:10:55.049 "product_name": "Raid Volume", 00:10:55.049 "block_size": 512, 00:10:55.049 "num_blocks": 190464, 00:10:55.049 "uuid": "38fd966e-b2d0-4047-b08a-04b881619628", 00:10:55.049 "assigned_rate_limits": { 00:10:55.049 "rw_ios_per_sec": 0, 00:10:55.049 "rw_mbytes_per_sec": 0, 00:10:55.049 "r_mbytes_per_sec": 0, 00:10:55.049 "w_mbytes_per_sec": 0 00:10:55.049 }, 00:10:55.049 "claimed": false, 00:10:55.049 "zoned": false, 00:10:55.049 "supported_io_types": { 00:10:55.049 "read": true, 00:10:55.049 "write": true, 00:10:55.049 "unmap": true, 00:10:55.049 "flush": true, 00:10:55.049 "reset": true, 00:10:55.049 "nvme_admin": false, 00:10:55.049 "nvme_io": false, 00:10:55.049 "nvme_io_md": false, 00:10:55.049 
"write_zeroes": true, 00:10:55.049 "zcopy": false, 00:10:55.049 "get_zone_info": false, 00:10:55.049 "zone_management": false, 00:10:55.049 "zone_append": false, 00:10:55.049 "compare": false, 00:10:55.049 "compare_and_write": false, 00:10:55.049 "abort": false, 00:10:55.049 "seek_hole": false, 00:10:55.049 "seek_data": false, 00:10:55.049 "copy": false, 00:10:55.049 "nvme_iov_md": false 00:10:55.049 }, 00:10:55.049 "memory_domains": [ 00:10:55.049 { 00:10:55.049 "dma_device_id": "system", 00:10:55.049 "dma_device_type": 1 00:10:55.049 }, 00:10:55.049 { 00:10:55.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.049 "dma_device_type": 2 00:10:55.049 }, 00:10:55.049 { 00:10:55.049 "dma_device_id": "system", 00:10:55.049 "dma_device_type": 1 00:10:55.049 }, 00:10:55.049 { 00:10:55.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.049 "dma_device_type": 2 00:10:55.049 }, 00:10:55.049 { 00:10:55.049 "dma_device_id": "system", 00:10:55.049 "dma_device_type": 1 00:10:55.049 }, 00:10:55.049 { 00:10:55.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.049 "dma_device_type": 2 00:10:55.049 } 00:10:55.049 ], 00:10:55.049 "driver_specific": { 00:10:55.049 "raid": { 00:10:55.049 "uuid": "38fd966e-b2d0-4047-b08a-04b881619628", 00:10:55.049 "strip_size_kb": 64, 00:10:55.049 "state": "online", 00:10:55.049 "raid_level": "raid0", 00:10:55.049 "superblock": true, 00:10:55.049 "num_base_bdevs": 3, 00:10:55.049 "num_base_bdevs_discovered": 3, 00:10:55.049 "num_base_bdevs_operational": 3, 00:10:55.049 "base_bdevs_list": [ 00:10:55.049 { 00:10:55.049 "name": "BaseBdev1", 00:10:55.049 "uuid": "54d0aadd-defc-4bbf-9b87-9b433e36bf85", 00:10:55.049 "is_configured": true, 00:10:55.049 "data_offset": 2048, 00:10:55.049 "data_size": 63488 00:10:55.049 }, 00:10:55.049 { 00:10:55.049 "name": "BaseBdev2", 00:10:55.049 "uuid": "add45960-fa30-45c5-a349-1e5891b72e4d", 00:10:55.049 "is_configured": true, 00:10:55.049 "data_offset": 2048, 00:10:55.049 "data_size": 63488 00:10:55.049 }, 
00:10:55.049 { 00:10:55.049 "name": "BaseBdev3", 00:10:55.049 "uuid": "26d563f5-31c1-44fc-b974-04739bc6d8e9", 00:10:55.049 "is_configured": true, 00:10:55.049 "data_offset": 2048, 00:10:55.049 "data_size": 63488 00:10:55.049 } 00:10:55.049 ] 00:10:55.049 } 00:10:55.049 } 00:10:55.049 }' 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:55.049 BaseBdev2 00:10:55.049 BaseBdev3' 00:10:55.049 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 22:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.309 
22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.309 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.309 [2024-09-27 22:27:51.113586] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.309 [2024-09-27 22:27:51.113832] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.309 [2024-09-27 22:27:51.114134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.569 "name": "Existed_Raid", 00:10:55.569 "uuid": "38fd966e-b2d0-4047-b08a-04b881619628", 00:10:55.569 "strip_size_kb": 64, 00:10:55.569 "state": "offline", 00:10:55.569 "raid_level": "raid0", 00:10:55.569 "superblock": true, 00:10:55.569 "num_base_bdevs": 3, 00:10:55.569 "num_base_bdevs_discovered": 2, 00:10:55.569 "num_base_bdevs_operational": 2, 00:10:55.569 "base_bdevs_list": [ 00:10:55.569 { 00:10:55.569 "name": null, 00:10:55.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.569 "is_configured": false, 00:10:55.569 "data_offset": 0, 00:10:55.569 "data_size": 63488 00:10:55.569 }, 00:10:55.569 { 00:10:55.569 "name": "BaseBdev2", 00:10:55.569 "uuid": "add45960-fa30-45c5-a349-1e5891b72e4d", 00:10:55.569 "is_configured": true, 00:10:55.569 "data_offset": 2048, 00:10:55.569 "data_size": 63488 00:10:55.569 }, 00:10:55.569 { 00:10:55.569 "name": "BaseBdev3", 00:10:55.569 "uuid": "26d563f5-31c1-44fc-b974-04739bc6d8e9", 
00:10:55.569 "is_configured": true, 00:10:55.569 "data_offset": 2048, 00:10:55.569 "data_size": 63488 00:10:55.569 } 00:10:55.569 ] 00:10:55.569 }' 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.569 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.828 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.828 [2024-09-27 22:27:51.699501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.087 [2024-09-27 22:27:51.849132] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.087 [2024-09-27 22:27:51.849346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.087 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 22:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 BaseBdev2 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:56.347 22:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 [ 00:10:56.347 { 00:10:56.347 "name": "BaseBdev2", 00:10:56.347 "aliases": [ 00:10:56.347 "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed" 00:10:56.347 ], 00:10:56.347 "product_name": "Malloc disk", 00:10:56.347 "block_size": 512, 00:10:56.347 "num_blocks": 65536, 00:10:56.347 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:56.347 "assigned_rate_limits": { 00:10:56.347 "rw_ios_per_sec": 0, 00:10:56.347 "rw_mbytes_per_sec": 0, 00:10:56.347 "r_mbytes_per_sec": 0, 00:10:56.347 "w_mbytes_per_sec": 0 00:10:56.347 }, 00:10:56.347 "claimed": false, 00:10:56.347 "zoned": false, 00:10:56.347 "supported_io_types": { 00:10:56.347 "read": true, 00:10:56.347 "write": true, 00:10:56.347 "unmap": true, 00:10:56.347 "flush": true, 00:10:56.347 "reset": true, 00:10:56.347 "nvme_admin": false, 00:10:56.347 "nvme_io": false, 00:10:56.347 "nvme_io_md": false, 00:10:56.347 "write_zeroes": true, 00:10:56.347 "zcopy": true, 00:10:56.347 "get_zone_info": false, 00:10:56.347 
"zone_management": false, 00:10:56.347 "zone_append": false, 00:10:56.347 "compare": false, 00:10:56.347 "compare_and_write": false, 00:10:56.347 "abort": true, 00:10:56.347 "seek_hole": false, 00:10:56.347 "seek_data": false, 00:10:56.347 "copy": true, 00:10:56.347 "nvme_iov_md": false 00:10:56.347 }, 00:10:56.347 "memory_domains": [ 00:10:56.347 { 00:10:56.347 "dma_device_id": "system", 00:10:56.347 "dma_device_type": 1 00:10:56.347 }, 00:10:56.347 { 00:10:56.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.347 "dma_device_type": 2 00:10:56.347 } 00:10:56.347 ], 00:10:56.347 "driver_specific": {} 00:10:56.347 } 00:10:56.347 ] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 BaseBdev3 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.347 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.347 [ 00:10:56.347 { 00:10:56.347 "name": "BaseBdev3", 00:10:56.347 "aliases": [ 00:10:56.347 "7c3872e9-032b-4a8b-b289-bb06db6dc336" 00:10:56.347 ], 00:10:56.347 "product_name": "Malloc disk", 00:10:56.347 "block_size": 512, 00:10:56.347 "num_blocks": 65536, 00:10:56.347 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:56.347 "assigned_rate_limits": { 00:10:56.347 "rw_ios_per_sec": 0, 00:10:56.347 "rw_mbytes_per_sec": 0, 00:10:56.347 "r_mbytes_per_sec": 0, 00:10:56.347 "w_mbytes_per_sec": 0 00:10:56.347 }, 00:10:56.347 "claimed": false, 00:10:56.347 "zoned": false, 00:10:56.347 "supported_io_types": { 00:10:56.347 "read": true, 00:10:56.347 "write": true, 00:10:56.347 "unmap": true, 00:10:56.347 "flush": true, 00:10:56.347 "reset": true, 00:10:56.347 "nvme_admin": false, 00:10:56.347 "nvme_io": false, 00:10:56.347 "nvme_io_md": false, 00:10:56.347 "write_zeroes": true, 00:10:56.347 
"zcopy": true, 00:10:56.347 "get_zone_info": false, 00:10:56.348 "zone_management": false, 00:10:56.348 "zone_append": false, 00:10:56.348 "compare": false, 00:10:56.348 "compare_and_write": false, 00:10:56.348 "abort": true, 00:10:56.348 "seek_hole": false, 00:10:56.348 "seek_data": false, 00:10:56.348 "copy": true, 00:10:56.348 "nvme_iov_md": false 00:10:56.348 }, 00:10:56.348 "memory_domains": [ 00:10:56.348 { 00:10:56.348 "dma_device_id": "system", 00:10:56.348 "dma_device_type": 1 00:10:56.348 }, 00:10:56.348 { 00:10:56.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.348 "dma_device_type": 2 00:10:56.348 } 00:10:56.348 ], 00:10:56.348 "driver_specific": {} 00:10:56.348 } 00:10:56.348 ] 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.348 [2024-09-27 22:27:52.198354] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.348 [2024-09-27 22:27:52.198408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.348 [2024-09-27 22:27:52.198439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.348 [2024-09-27 22:27:52.201314] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.348 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.606 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.606 22:27:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.606 "name": "Existed_Raid", 00:10:56.606 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:56.606 "strip_size_kb": 64, 00:10:56.606 "state": "configuring", 00:10:56.606 "raid_level": "raid0", 00:10:56.606 "superblock": true, 00:10:56.606 "num_base_bdevs": 3, 00:10:56.606 "num_base_bdevs_discovered": 2, 00:10:56.606 "num_base_bdevs_operational": 3, 00:10:56.606 "base_bdevs_list": [ 00:10:56.606 { 00:10:56.606 "name": "BaseBdev1", 00:10:56.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.606 "is_configured": false, 00:10:56.606 "data_offset": 0, 00:10:56.606 "data_size": 0 00:10:56.606 }, 00:10:56.606 { 00:10:56.606 "name": "BaseBdev2", 00:10:56.606 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:56.606 "is_configured": true, 00:10:56.606 "data_offset": 2048, 00:10:56.606 "data_size": 63488 00:10:56.606 }, 00:10:56.606 { 00:10:56.606 "name": "BaseBdev3", 00:10:56.606 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:56.606 "is_configured": true, 00:10:56.606 "data_offset": 2048, 00:10:56.606 "data_size": 63488 00:10:56.606 } 00:10:56.606 ] 00:10:56.606 }' 00:10:56.606 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.606 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.864 [2024-09-27 22:27:52.601758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.864 22:27:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.864 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.865 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.865 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.865 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.865 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.865 "name": "Existed_Raid", 00:10:56.865 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:56.865 "strip_size_kb": 64, 
00:10:56.865 "state": "configuring", 00:10:56.865 "raid_level": "raid0", 00:10:56.865 "superblock": true, 00:10:56.865 "num_base_bdevs": 3, 00:10:56.865 "num_base_bdevs_discovered": 1, 00:10:56.865 "num_base_bdevs_operational": 3, 00:10:56.865 "base_bdevs_list": [ 00:10:56.865 { 00:10:56.865 "name": "BaseBdev1", 00:10:56.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.865 "is_configured": false, 00:10:56.865 "data_offset": 0, 00:10:56.865 "data_size": 0 00:10:56.865 }, 00:10:56.865 { 00:10:56.865 "name": null, 00:10:56.865 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:56.865 "is_configured": false, 00:10:56.865 "data_offset": 0, 00:10:56.865 "data_size": 63488 00:10:56.865 }, 00:10:56.865 { 00:10:56.865 "name": "BaseBdev3", 00:10:56.865 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:56.865 "is_configured": true, 00:10:56.865 "data_offset": 2048, 00:10:56.865 "data_size": 63488 00:10:56.865 } 00:10:56.865 ] 00:10:56.865 }' 00:10:56.865 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.865 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.122 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.122 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.122 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.122 22:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:57.383 22:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.383 [2024-09-27 22:27:53.076507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.383 BaseBdev1 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.383 
[ 00:10:57.383 { 00:10:57.383 "name": "BaseBdev1", 00:10:57.383 "aliases": [ 00:10:57.383 "e6996369-671f-49fe-a0c0-d1b60ba9eaab" 00:10:57.383 ], 00:10:57.383 "product_name": "Malloc disk", 00:10:57.383 "block_size": 512, 00:10:57.383 "num_blocks": 65536, 00:10:57.383 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:10:57.383 "assigned_rate_limits": { 00:10:57.383 "rw_ios_per_sec": 0, 00:10:57.383 "rw_mbytes_per_sec": 0, 00:10:57.383 "r_mbytes_per_sec": 0, 00:10:57.383 "w_mbytes_per_sec": 0 00:10:57.383 }, 00:10:57.383 "claimed": true, 00:10:57.383 "claim_type": "exclusive_write", 00:10:57.383 "zoned": false, 00:10:57.383 "supported_io_types": { 00:10:57.383 "read": true, 00:10:57.383 "write": true, 00:10:57.383 "unmap": true, 00:10:57.383 "flush": true, 00:10:57.383 "reset": true, 00:10:57.383 "nvme_admin": false, 00:10:57.383 "nvme_io": false, 00:10:57.383 "nvme_io_md": false, 00:10:57.383 "write_zeroes": true, 00:10:57.383 "zcopy": true, 00:10:57.383 "get_zone_info": false, 00:10:57.383 "zone_management": false, 00:10:57.383 "zone_append": false, 00:10:57.383 "compare": false, 00:10:57.383 "compare_and_write": false, 00:10:57.383 "abort": true, 00:10:57.383 "seek_hole": false, 00:10:57.383 "seek_data": false, 00:10:57.383 "copy": true, 00:10:57.383 "nvme_iov_md": false 00:10:57.383 }, 00:10:57.383 "memory_domains": [ 00:10:57.383 { 00:10:57.383 "dma_device_id": "system", 00:10:57.383 "dma_device_type": 1 00:10:57.383 }, 00:10:57.383 { 00:10:57.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.383 "dma_device_type": 2 00:10:57.383 } 00:10:57.383 ], 00:10:57.383 "driver_specific": {} 00:10:57.383 } 00:10:57.383 ] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.383 "name": "Existed_Raid", 00:10:57.383 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:57.383 "strip_size_kb": 64, 00:10:57.383 "state": "configuring", 00:10:57.383 "raid_level": "raid0", 00:10:57.383 "superblock": true, 
00:10:57.383 "num_base_bdevs": 3, 00:10:57.383 "num_base_bdevs_discovered": 2, 00:10:57.383 "num_base_bdevs_operational": 3, 00:10:57.383 "base_bdevs_list": [ 00:10:57.383 { 00:10:57.383 "name": "BaseBdev1", 00:10:57.383 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:10:57.383 "is_configured": true, 00:10:57.383 "data_offset": 2048, 00:10:57.383 "data_size": 63488 00:10:57.383 }, 00:10:57.383 { 00:10:57.383 "name": null, 00:10:57.383 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:57.383 "is_configured": false, 00:10:57.383 "data_offset": 0, 00:10:57.383 "data_size": 63488 00:10:57.383 }, 00:10:57.383 { 00:10:57.383 "name": "BaseBdev3", 00:10:57.383 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:57.383 "is_configured": true, 00:10:57.383 "data_offset": 2048, 00:10:57.383 "data_size": 63488 00:10:57.383 } 00:10:57.383 ] 00:10:57.383 }' 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.383 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.950 [2024-09-27 22:27:53.596150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.950 "name": "Existed_Raid", 00:10:57.950 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:57.950 "strip_size_kb": 64, 00:10:57.950 "state": "configuring", 00:10:57.950 "raid_level": "raid0", 00:10:57.950 "superblock": true, 00:10:57.950 "num_base_bdevs": 3, 00:10:57.950 "num_base_bdevs_discovered": 1, 00:10:57.950 "num_base_bdevs_operational": 3, 00:10:57.950 "base_bdevs_list": [ 00:10:57.950 { 00:10:57.950 "name": "BaseBdev1", 00:10:57.950 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:10:57.950 "is_configured": true, 00:10:57.950 "data_offset": 2048, 00:10:57.950 "data_size": 63488 00:10:57.950 }, 00:10:57.950 { 00:10:57.950 "name": null, 00:10:57.950 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:57.950 "is_configured": false, 00:10:57.950 "data_offset": 0, 00:10:57.950 "data_size": 63488 00:10:57.950 }, 00:10:57.950 { 00:10:57.950 "name": null, 00:10:57.950 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:57.950 "is_configured": false, 00:10:57.950 "data_offset": 0, 00:10:57.950 "data_size": 63488 00:10:57.950 } 00:10:57.950 ] 00:10:57.950 }' 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.950 22:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.237 [2024-09-27 22:27:54.087660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.237 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.497 "name": "Existed_Raid", 00:10:58.497 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:58.497 "strip_size_kb": 64, 00:10:58.497 "state": "configuring", 00:10:58.497 "raid_level": "raid0", 00:10:58.497 "superblock": true, 00:10:58.497 "num_base_bdevs": 3, 00:10:58.497 "num_base_bdevs_discovered": 2, 00:10:58.497 "num_base_bdevs_operational": 3, 00:10:58.497 "base_bdevs_list": [ 00:10:58.497 { 00:10:58.497 "name": "BaseBdev1", 00:10:58.497 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:10:58.497 "is_configured": true, 00:10:58.497 "data_offset": 2048, 00:10:58.497 "data_size": 63488 00:10:58.497 }, 00:10:58.497 { 00:10:58.497 "name": null, 00:10:58.497 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:58.497 "is_configured": false, 00:10:58.497 "data_offset": 0, 00:10:58.497 "data_size": 63488 00:10:58.497 }, 00:10:58.497 { 00:10:58.497 "name": "BaseBdev3", 00:10:58.497 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:58.497 "is_configured": true, 00:10:58.497 "data_offset": 2048, 00:10:58.497 "data_size": 63488 00:10:58.497 } 00:10:58.497 ] 00:10:58.497 }' 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.497 22:27:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.756 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.756 [2024-09-27 22:27:54.615289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.014 "name": "Existed_Raid", 00:10:59.014 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:59.014 "strip_size_kb": 64, 00:10:59.014 "state": "configuring", 00:10:59.014 "raid_level": "raid0", 00:10:59.014 "superblock": true, 00:10:59.014 "num_base_bdevs": 3, 00:10:59.014 "num_base_bdevs_discovered": 1, 00:10:59.014 "num_base_bdevs_operational": 3, 00:10:59.014 "base_bdevs_list": [ 00:10:59.014 { 00:10:59.014 "name": null, 00:10:59.014 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:10:59.014 "is_configured": false, 00:10:59.014 "data_offset": 0, 00:10:59.014 "data_size": 63488 00:10:59.014 }, 00:10:59.014 { 00:10:59.014 "name": null, 00:10:59.014 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:59.014 "is_configured": false, 00:10:59.014 "data_offset": 0, 00:10:59.014 
"data_size": 63488 00:10:59.014 }, 00:10:59.014 { 00:10:59.014 "name": "BaseBdev3", 00:10:59.014 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:59.014 "is_configured": true, 00:10:59.014 "data_offset": 2048, 00:10:59.014 "data_size": 63488 00:10:59.014 } 00:10:59.014 ] 00:10:59.014 }' 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.014 22:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.582 [2024-09-27 22:27:55.202150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:59.582 22:27:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.582 "name": "Existed_Raid", 00:10:59.582 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:10:59.582 "strip_size_kb": 64, 00:10:59.582 "state": "configuring", 00:10:59.582 "raid_level": "raid0", 00:10:59.582 "superblock": true, 00:10:59.582 "num_base_bdevs": 3, 00:10:59.582 
"num_base_bdevs_discovered": 2, 00:10:59.582 "num_base_bdevs_operational": 3, 00:10:59.582 "base_bdevs_list": [ 00:10:59.582 { 00:10:59.582 "name": null, 00:10:59.582 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:10:59.582 "is_configured": false, 00:10:59.582 "data_offset": 0, 00:10:59.582 "data_size": 63488 00:10:59.582 }, 00:10:59.582 { 00:10:59.582 "name": "BaseBdev2", 00:10:59.582 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:10:59.582 "is_configured": true, 00:10:59.582 "data_offset": 2048, 00:10:59.582 "data_size": 63488 00:10:59.582 }, 00:10:59.582 { 00:10:59.582 "name": "BaseBdev3", 00:10:59.582 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:10:59.582 "is_configured": true, 00:10:59.582 "data_offset": 2048, 00:10:59.582 "data_size": 63488 00:10:59.582 } 00:10:59.582 ] 00:10:59.582 }' 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.582 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.841 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.841 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.841 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.841 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.841 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.841 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.100 22:27:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e6996369-671f-49fe-a0c0-d1b60ba9eaab 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 [2024-09-27 22:27:55.816574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.100 NewBaseBdev 00:11:00.100 [2024-09-27 22:27:55.817075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.100 [2024-09-27 22:27:55.817104] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.100 [2024-09-27 22:27:55.817396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:00.100 [2024-09-27 22:27:55.817543] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.100 [2024-09-27 22:27:55.817553] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:00.100 [2024-09-27 22:27:55.817690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 [ 00:11:00.100 { 00:11:00.100 "name": "NewBaseBdev", 00:11:00.100 "aliases": [ 00:11:00.100 "e6996369-671f-49fe-a0c0-d1b60ba9eaab" 00:11:00.100 ], 00:11:00.100 "product_name": "Malloc disk", 00:11:00.100 "block_size": 512, 00:11:00.100 "num_blocks": 65536, 00:11:00.100 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:11:00.100 "assigned_rate_limits": { 00:11:00.100 "rw_ios_per_sec": 0, 00:11:00.100 "rw_mbytes_per_sec": 0, 00:11:00.100 "r_mbytes_per_sec": 0, 00:11:00.100 "w_mbytes_per_sec": 0 00:11:00.100 }, 00:11:00.100 "claimed": true, 00:11:00.100 "claim_type": "exclusive_write", 00:11:00.100 "zoned": false, 00:11:00.100 "supported_io_types": { 00:11:00.100 "read": true, 00:11:00.100 "write": true, 
00:11:00.100 "unmap": true, 00:11:00.100 "flush": true, 00:11:00.100 "reset": true, 00:11:00.100 "nvme_admin": false, 00:11:00.100 "nvme_io": false, 00:11:00.100 "nvme_io_md": false, 00:11:00.100 "write_zeroes": true, 00:11:00.100 "zcopy": true, 00:11:00.100 "get_zone_info": false, 00:11:00.100 "zone_management": false, 00:11:00.100 "zone_append": false, 00:11:00.100 "compare": false, 00:11:00.100 "compare_and_write": false, 00:11:00.100 "abort": true, 00:11:00.100 "seek_hole": false, 00:11:00.100 "seek_data": false, 00:11:00.100 "copy": true, 00:11:00.100 "nvme_iov_md": false 00:11:00.100 }, 00:11:00.100 "memory_domains": [ 00:11:00.100 { 00:11:00.100 "dma_device_id": "system", 00:11:00.100 "dma_device_type": 1 00:11:00.100 }, 00:11:00.100 { 00:11:00.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.100 "dma_device_type": 2 00:11:00.100 } 00:11:00.100 ], 00:11:00.100 "driver_specific": {} 00:11:00.100 } 00:11:00.100 ] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.100 "name": "Existed_Raid", 00:11:00.100 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:11:00.100 "strip_size_kb": 64, 00:11:00.100 "state": "online", 00:11:00.100 "raid_level": "raid0", 00:11:00.100 "superblock": true, 00:11:00.100 "num_base_bdevs": 3, 00:11:00.100 "num_base_bdevs_discovered": 3, 00:11:00.100 "num_base_bdevs_operational": 3, 00:11:00.100 "base_bdevs_list": [ 00:11:00.100 { 00:11:00.100 "name": "NewBaseBdev", 00:11:00.100 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:11:00.100 "is_configured": true, 00:11:00.100 "data_offset": 2048, 00:11:00.100 "data_size": 63488 00:11:00.100 }, 00:11:00.100 { 00:11:00.100 "name": "BaseBdev2", 00:11:00.100 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:11:00.100 "is_configured": true, 00:11:00.100 "data_offset": 2048, 00:11:00.100 "data_size": 63488 00:11:00.100 }, 00:11:00.100 { 00:11:00.100 "name": "BaseBdev3", 00:11:00.100 "uuid": 
"7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:11:00.100 "is_configured": true, 00:11:00.100 "data_offset": 2048, 00:11:00.100 "data_size": 63488 00:11:00.100 } 00:11:00.100 ] 00:11:00.100 }' 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.100 22:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.669 [2024-09-27 22:27:56.276354] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.669 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.669 "name": "Existed_Raid", 00:11:00.669 "aliases": [ 00:11:00.669 "61205403-ca17-47ca-bc76-2f4dd5ad4c0f" 
00:11:00.669 ], 00:11:00.669 "product_name": "Raid Volume", 00:11:00.669 "block_size": 512, 00:11:00.669 "num_blocks": 190464, 00:11:00.669 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:11:00.669 "assigned_rate_limits": { 00:11:00.669 "rw_ios_per_sec": 0, 00:11:00.669 "rw_mbytes_per_sec": 0, 00:11:00.669 "r_mbytes_per_sec": 0, 00:11:00.669 "w_mbytes_per_sec": 0 00:11:00.669 }, 00:11:00.669 "claimed": false, 00:11:00.669 "zoned": false, 00:11:00.669 "supported_io_types": { 00:11:00.669 "read": true, 00:11:00.670 "write": true, 00:11:00.670 "unmap": true, 00:11:00.670 "flush": true, 00:11:00.670 "reset": true, 00:11:00.670 "nvme_admin": false, 00:11:00.670 "nvme_io": false, 00:11:00.670 "nvme_io_md": false, 00:11:00.670 "write_zeroes": true, 00:11:00.670 "zcopy": false, 00:11:00.670 "get_zone_info": false, 00:11:00.670 "zone_management": false, 00:11:00.670 "zone_append": false, 00:11:00.670 "compare": false, 00:11:00.670 "compare_and_write": false, 00:11:00.670 "abort": false, 00:11:00.670 "seek_hole": false, 00:11:00.670 "seek_data": false, 00:11:00.670 "copy": false, 00:11:00.670 "nvme_iov_md": false 00:11:00.670 }, 00:11:00.670 "memory_domains": [ 00:11:00.670 { 00:11:00.670 "dma_device_id": "system", 00:11:00.670 "dma_device_type": 1 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.670 "dma_device_type": 2 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "dma_device_id": "system", 00:11:00.670 "dma_device_type": 1 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.670 "dma_device_type": 2 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "dma_device_id": "system", 00:11:00.670 "dma_device_type": 1 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.670 "dma_device_type": 2 00:11:00.670 } 00:11:00.670 ], 00:11:00.670 "driver_specific": { 00:11:00.670 "raid": { 00:11:00.670 "uuid": "61205403-ca17-47ca-bc76-2f4dd5ad4c0f", 00:11:00.670 
"strip_size_kb": 64, 00:11:00.670 "state": "online", 00:11:00.670 "raid_level": "raid0", 00:11:00.670 "superblock": true, 00:11:00.670 "num_base_bdevs": 3, 00:11:00.670 "num_base_bdevs_discovered": 3, 00:11:00.670 "num_base_bdevs_operational": 3, 00:11:00.670 "base_bdevs_list": [ 00:11:00.670 { 00:11:00.670 "name": "NewBaseBdev", 00:11:00.670 "uuid": "e6996369-671f-49fe-a0c0-d1b60ba9eaab", 00:11:00.670 "is_configured": true, 00:11:00.670 "data_offset": 2048, 00:11:00.670 "data_size": 63488 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "name": "BaseBdev2", 00:11:00.670 "uuid": "e55d7edd-3c42-4344-a0e5-a5a5f50cd2ed", 00:11:00.670 "is_configured": true, 00:11:00.670 "data_offset": 2048, 00:11:00.670 "data_size": 63488 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "name": "BaseBdev3", 00:11:00.670 "uuid": "7c3872e9-032b-4a8b-b289-bb06db6dc336", 00:11:00.670 "is_configured": true, 00:11:00.670 "data_offset": 2048, 00:11:00.670 "data_size": 63488 00:11:00.670 } 00:11:00.670 ] 00:11:00.670 } 00:11:00.670 } 00:11:00.670 }' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:00.670 BaseBdev2 00:11:00.670 BaseBdev3' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.670 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.929 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.929 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.929 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.929 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.929 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.929 [2024-09-27 22:27:56.571577] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.930 [2024-09-27 22:27:56.571730] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.930 [2024-09-27 22:27:56.571929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.930 [2024-09-27 22:27:56.572075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.930 [2024-09-27 22:27:56.572166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64924 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64924 ']' 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 64924 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64924 00:11:00.930 killing process with pid 64924 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64924' 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64924 00:11:00.930 [2024-09-27 22:27:56.612709] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.930 22:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64924 00:11:01.189 [2024-09-27 22:27:56.952251] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.723 22:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.723 00:11:03.723 real 0m12.056s 00:11:03.723 user 0m18.298s 00:11:03.723 sys 0m2.206s 00:11:03.723 22:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.723 22:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.723 ************************************ 00:11:03.723 END TEST raid_state_function_test_sb 00:11:03.723 ************************************ 00:11:03.723 22:27:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:03.723 22:27:59 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:03.723 22:27:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.723 22:27:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.723 ************************************ 00:11:03.723 START TEST raid_superblock_test 00:11:03.723 ************************************ 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:03.723 22:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65561 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65561 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65561 ']' 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.723 22:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.723 [2024-09-27 22:27:59.275168] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:11:03.723 [2024-09-27 22:27:59.275303] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65561 ] 00:11:03.723 [2024-09-27 22:27:59.436469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.982 [2024-09-27 22:27:59.688707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.241 [2024-09-27 22:27:59.946537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.241 [2024-09-27 22:27:59.946573] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:04.808 
22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.808 malloc1 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.808 [2024-09-27 22:28:00.506992] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.808 [2024-09-27 22:28:00.507056] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.808 [2024-09-27 22:28:00.507089] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:04.808 [2024-09-27 22:28:00.507104] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.808 [2024-09-27 22:28:00.509483] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.808 [2024-09-27 22:28:00.509519] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.808 pt1 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.808 malloc2 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.808 [2024-09-27 22:28:00.572091] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.808 [2024-09-27 22:28:00.572150] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.808 [2024-09-27 22:28:00.572179] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:04.808 [2024-09-27 22:28:00.572191] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.808 [2024-09-27 22:28:00.574675] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.808 [2024-09-27 22:28:00.574712] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.808 
pt2 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.808 malloc3 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.808 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.808 [2024-09-27 22:28:00.635344] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.808 [2024-09-27 22:28:00.635397] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.808 [2024-09-27 22:28:00.635422] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:04.808 [2024-09-27 22:28:00.635434] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.808 [2024-09-27 22:28:00.637971] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.809 [2024-09-27 22:28:00.638025] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.809 pt3 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.809 [2024-09-27 22:28:00.647400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.809 [2024-09-27 22:28:00.649586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.809 [2024-09-27 22:28:00.649659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.809 [2024-09-27 22:28:00.649831] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:04.809 [2024-09-27 22:28:00.649846] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:04.809 [2024-09-27 22:28:00.650129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:04.809 [2024-09-27 22:28:00.650318] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:04.809 [2024-09-27 22:28:00.650337] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:04.809 [2024-09-27 22:28:00.650502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.809 22:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.809 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.068 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.068 "name": "raid_bdev1", 00:11:05.068 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:05.068 "strip_size_kb": 64, 00:11:05.068 "state": "online", 00:11:05.068 "raid_level": "raid0", 00:11:05.068 "superblock": true, 00:11:05.068 "num_base_bdevs": 3, 00:11:05.068 "num_base_bdevs_discovered": 3, 00:11:05.068 "num_base_bdevs_operational": 3, 00:11:05.068 "base_bdevs_list": [ 00:11:05.068 { 00:11:05.068 "name": "pt1", 00:11:05.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.068 "is_configured": true, 00:11:05.068 "data_offset": 2048, 00:11:05.068 "data_size": 63488 00:11:05.068 }, 00:11:05.068 { 00:11:05.068 "name": "pt2", 00:11:05.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.068 "is_configured": true, 00:11:05.068 "data_offset": 2048, 00:11:05.068 "data_size": 63488 00:11:05.068 }, 00:11:05.068 { 00:11:05.068 "name": "pt3", 00:11:05.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.068 "is_configured": true, 00:11:05.068 "data_offset": 2048, 00:11:05.068 "data_size": 63488 00:11:05.068 } 00:11:05.068 ] 00:11:05.068 }' 00:11:05.068 22:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.068 22:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.328 [2024-09-27 22:28:01.115293] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.328 "name": "raid_bdev1", 00:11:05.328 "aliases": [ 00:11:05.328 "e2951195-d5c0-4892-b57a-48fc05edb935" 00:11:05.328 ], 00:11:05.328 "product_name": "Raid Volume", 00:11:05.328 "block_size": 512, 00:11:05.328 "num_blocks": 190464, 00:11:05.328 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:05.328 "assigned_rate_limits": { 00:11:05.328 "rw_ios_per_sec": 0, 00:11:05.328 "rw_mbytes_per_sec": 0, 00:11:05.328 "r_mbytes_per_sec": 0, 00:11:05.328 "w_mbytes_per_sec": 0 00:11:05.328 }, 00:11:05.328 "claimed": false, 00:11:05.328 "zoned": false, 00:11:05.328 "supported_io_types": { 00:11:05.328 "read": true, 00:11:05.328 "write": true, 00:11:05.328 "unmap": true, 00:11:05.328 "flush": true, 00:11:05.328 "reset": true, 00:11:05.328 "nvme_admin": false, 00:11:05.328 "nvme_io": false, 00:11:05.328 "nvme_io_md": false, 00:11:05.328 "write_zeroes": true, 00:11:05.328 "zcopy": false, 00:11:05.328 "get_zone_info": false, 00:11:05.328 "zone_management": false, 00:11:05.328 "zone_append": false, 00:11:05.328 "compare": 
false, 00:11:05.328 "compare_and_write": false, 00:11:05.328 "abort": false, 00:11:05.328 "seek_hole": false, 00:11:05.328 "seek_data": false, 00:11:05.328 "copy": false, 00:11:05.328 "nvme_iov_md": false 00:11:05.328 }, 00:11:05.328 "memory_domains": [ 00:11:05.328 { 00:11:05.328 "dma_device_id": "system", 00:11:05.328 "dma_device_type": 1 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.328 "dma_device_type": 2 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "system", 00:11:05.328 "dma_device_type": 1 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.328 "dma_device_type": 2 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "system", 00:11:05.328 "dma_device_type": 1 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.328 "dma_device_type": 2 00:11:05.328 } 00:11:05.328 ], 00:11:05.328 "driver_specific": { 00:11:05.328 "raid": { 00:11:05.328 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:05.328 "strip_size_kb": 64, 00:11:05.328 "state": "online", 00:11:05.328 "raid_level": "raid0", 00:11:05.328 "superblock": true, 00:11:05.328 "num_base_bdevs": 3, 00:11:05.328 "num_base_bdevs_discovered": 3, 00:11:05.328 "num_base_bdevs_operational": 3, 00:11:05.328 "base_bdevs_list": [ 00:11:05.328 { 00:11:05.328 "name": "pt1", 00:11:05.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.328 "is_configured": true, 00:11:05.328 "data_offset": 2048, 00:11:05.328 "data_size": 63488 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "name": "pt2", 00:11:05.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.328 "is_configured": true, 00:11:05.328 "data_offset": 2048, 00:11:05.328 "data_size": 63488 00:11:05.328 }, 00:11:05.328 { 00:11:05.328 "name": "pt3", 00:11:05.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.328 "is_configured": true, 00:11:05.328 "data_offset": 2048, 00:11:05.328 "data_size": 
63488 00:11:05.328 } 00:11:05.328 ] 00:11:05.328 } 00:11:05.328 } 00:11:05.328 }' 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.328 pt2 00:11:05.328 pt3' 00:11:05.328 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.588 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 
22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-09-27 22:28:01.390773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e2951195-d5c0-4892-b57a-48fc05edb935 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e2951195-d5c0-4892-b57a-48fc05edb935 ']' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-09-27 22:28:01.434435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.589 [2024-09-27 22:28:01.434489] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.589 [2024-09-27 22:28:01.434578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.589 [2024-09-27 22:28:01.434638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.589 [2024-09-27 22:28:01.434649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:05.589 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 [2024-09-27 22:28:01.586318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:05.849 [2024-09-27 22:28:01.588595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:05.849 [2024-09-27 22:28:01.588655] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:05.849 [2024-09-27 22:28:01.588723] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:05.849 [2024-09-27 22:28:01.588785] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:05.849 [2024-09-27 22:28:01.588806] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:05.849 [2024-09-27 22:28:01.588827] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.849 [2024-09-27 22:28:01.588838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:05.849 request: 00:11:05.849 { 00:11:05.849 "name": "raid_bdev1", 00:11:05.849 "raid_level": "raid0", 00:11:05.849 "base_bdevs": [ 00:11:05.849 "malloc1", 00:11:05.849 "malloc2", 00:11:05.849 "malloc3" 00:11:05.849 ], 00:11:05.849 "strip_size_kb": 64, 00:11:05.849 "superblock": false, 00:11:05.849 "method": "bdev_raid_create", 00:11:05.849 "req_id": 1 00:11:05.849 } 00:11:05.849 Got JSON-RPC error response 00:11:05.849 response: 00:11:05.849 { 00:11:05.849 "code": -17, 00:11:05.849 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:05.849 } 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.849 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.849 [2024-09-27 22:28:01.658155] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:05.849 [2024-09-27 22:28:01.658211] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.849 [2024-09-27 22:28:01.658234] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:05.850 [2024-09-27 22:28:01.658245] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.850 [2024-09-27 22:28:01.660866] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.850 [2024-09-27 22:28:01.660915] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:05.850 [2024-09-27 22:28:01.661014] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:05.850 [2024-09-27 22:28:01.661082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:05.850 pt1 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.850 "name": "raid_bdev1", 00:11:05.850 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:05.850 
"strip_size_kb": 64, 00:11:05.850 "state": "configuring", 00:11:05.850 "raid_level": "raid0", 00:11:05.850 "superblock": true, 00:11:05.850 "num_base_bdevs": 3, 00:11:05.850 "num_base_bdevs_discovered": 1, 00:11:05.850 "num_base_bdevs_operational": 3, 00:11:05.850 "base_bdevs_list": [ 00:11:05.850 { 00:11:05.850 "name": "pt1", 00:11:05.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.850 "is_configured": true, 00:11:05.850 "data_offset": 2048, 00:11:05.850 "data_size": 63488 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "name": null, 00:11:05.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.850 "is_configured": false, 00:11:05.850 "data_offset": 2048, 00:11:05.850 "data_size": 63488 00:11:05.850 }, 00:11:05.850 { 00:11:05.850 "name": null, 00:11:05.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.850 "is_configured": false, 00:11:05.850 "data_offset": 2048, 00:11:05.850 "data_size": 63488 00:11:05.850 } 00:11:05.850 ] 00:11:05.850 }' 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.850 22:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.418 [2024-09-27 22:28:02.105594] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.418 [2024-09-27 22:28:02.105660] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.418 [2024-09-27 22:28:02.105689] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:06.418 [2024-09-27 22:28:02.105702] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.418 [2024-09-27 22:28:02.106167] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.418 [2024-09-27 22:28:02.106187] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.418 [2024-09-27 22:28:02.106278] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.418 [2024-09-27 22:28:02.106299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.418 pt2 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.418 [2024-09-27 22:28:02.117593] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.418 22:28:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.418 "name": "raid_bdev1", 00:11:06.418 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:06.418 "strip_size_kb": 64, 00:11:06.418 "state": "configuring", 00:11:06.418 "raid_level": "raid0", 00:11:06.418 "superblock": true, 00:11:06.418 "num_base_bdevs": 3, 00:11:06.418 "num_base_bdevs_discovered": 1, 00:11:06.418 "num_base_bdevs_operational": 3, 00:11:06.418 "base_bdevs_list": [ 00:11:06.418 { 00:11:06.418 "name": "pt1", 00:11:06.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.418 "is_configured": true, 00:11:06.418 "data_offset": 2048, 00:11:06.418 "data_size": 63488 00:11:06.418 }, 00:11:06.418 { 00:11:06.418 "name": null, 00:11:06.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.418 "is_configured": false, 00:11:06.418 "data_offset": 0, 00:11:06.418 "data_size": 63488 00:11:06.418 }, 00:11:06.418 { 00:11:06.418 "name": null, 00:11:06.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.418 
"is_configured": false, 00:11:06.418 "data_offset": 2048, 00:11:06.418 "data_size": 63488 00:11:06.418 } 00:11:06.418 ] 00:11:06.418 }' 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.418 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.677 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:06.677 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:06.677 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.937 [2024-09-27 22:28:02.562166] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.937 [2024-09-27 22:28:02.562238] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.937 [2024-09-27 22:28:02.562258] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:06.937 [2024-09-27 22:28:02.562272] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.937 [2024-09-27 22:28:02.562753] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.937 [2024-09-27 22:28:02.562782] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.937 [2024-09-27 22:28:02.562868] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.937 [2024-09-27 22:28:02.562907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.937 pt2 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.937 [2024-09-27 22:28:02.574144] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.937 [2024-09-27 22:28:02.574193] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.937 [2024-09-27 22:28:02.574211] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:06.937 [2024-09-27 22:28:02.574224] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.937 [2024-09-27 22:28:02.574626] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.937 [2024-09-27 22:28:02.574649] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.937 [2024-09-27 22:28:02.574713] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.937 [2024-09-27 22:28:02.574741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.937 [2024-09-27 22:28:02.574857] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:06.937 [2024-09-27 22:28:02.574870] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:06.937 [2024-09-27 22:28:02.575153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.937 [2024-09-27 22:28:02.575299] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:06.937 [2024-09-27 22:28:02.575317] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:06.937 [2024-09-27 22:28:02.575462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.937 pt3 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.937 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.937 "name": "raid_bdev1", 00:11:06.937 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:06.937 "strip_size_kb": 64, 00:11:06.937 "state": "online", 00:11:06.937 "raid_level": "raid0", 00:11:06.937 "superblock": true, 00:11:06.937 "num_base_bdevs": 3, 00:11:06.937 "num_base_bdevs_discovered": 3, 00:11:06.937 "num_base_bdevs_operational": 3, 00:11:06.937 "base_bdevs_list": [ 00:11:06.937 { 00:11:06.937 "name": "pt1", 00:11:06.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.937 "is_configured": true, 00:11:06.937 "data_offset": 2048, 00:11:06.937 "data_size": 63488 00:11:06.937 }, 00:11:06.937 { 00:11:06.938 "name": "pt2", 00:11:06.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.938 "is_configured": true, 00:11:06.938 "data_offset": 2048, 00:11:06.938 "data_size": 63488 00:11:06.938 }, 00:11:06.938 { 00:11:06.938 "name": "pt3", 00:11:06.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.938 "is_configured": true, 00:11:06.938 "data_offset": 2048, 00:11:06.938 "data_size": 63488 00:11:06.938 } 00:11:06.938 ] 00:11:06.938 }' 00:11:06.938 22:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.938 22:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.196 22:28:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.196 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.196 [2024-09-27 22:28:03.065764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.455 "name": "raid_bdev1", 00:11:07.455 "aliases": [ 00:11:07.455 "e2951195-d5c0-4892-b57a-48fc05edb935" 00:11:07.455 ], 00:11:07.455 "product_name": "Raid Volume", 00:11:07.455 "block_size": 512, 00:11:07.455 "num_blocks": 190464, 00:11:07.455 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:07.455 "assigned_rate_limits": { 00:11:07.455 "rw_ios_per_sec": 0, 00:11:07.455 "rw_mbytes_per_sec": 0, 00:11:07.455 "r_mbytes_per_sec": 0, 00:11:07.455 "w_mbytes_per_sec": 0 00:11:07.455 }, 00:11:07.455 "claimed": false, 00:11:07.455 "zoned": false, 00:11:07.455 "supported_io_types": { 00:11:07.455 "read": true, 00:11:07.455 "write": true, 00:11:07.455 "unmap": true, 00:11:07.455 "flush": true, 00:11:07.455 "reset": true, 00:11:07.455 "nvme_admin": false, 00:11:07.455 "nvme_io": false, 00:11:07.455 "nvme_io_md": false, 00:11:07.455 
"write_zeroes": true, 00:11:07.455 "zcopy": false, 00:11:07.455 "get_zone_info": false, 00:11:07.455 "zone_management": false, 00:11:07.455 "zone_append": false, 00:11:07.455 "compare": false, 00:11:07.455 "compare_and_write": false, 00:11:07.455 "abort": false, 00:11:07.455 "seek_hole": false, 00:11:07.455 "seek_data": false, 00:11:07.455 "copy": false, 00:11:07.455 "nvme_iov_md": false 00:11:07.455 }, 00:11:07.455 "memory_domains": [ 00:11:07.455 { 00:11:07.455 "dma_device_id": "system", 00:11:07.455 "dma_device_type": 1 00:11:07.455 }, 00:11:07.455 { 00:11:07.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.455 "dma_device_type": 2 00:11:07.455 }, 00:11:07.455 { 00:11:07.455 "dma_device_id": "system", 00:11:07.455 "dma_device_type": 1 00:11:07.455 }, 00:11:07.455 { 00:11:07.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.455 "dma_device_type": 2 00:11:07.455 }, 00:11:07.455 { 00:11:07.455 "dma_device_id": "system", 00:11:07.455 "dma_device_type": 1 00:11:07.455 }, 00:11:07.455 { 00:11:07.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.455 "dma_device_type": 2 00:11:07.455 } 00:11:07.455 ], 00:11:07.455 "driver_specific": { 00:11:07.455 "raid": { 00:11:07.455 "uuid": "e2951195-d5c0-4892-b57a-48fc05edb935", 00:11:07.455 "strip_size_kb": 64, 00:11:07.455 "state": "online", 00:11:07.455 "raid_level": "raid0", 00:11:07.455 "superblock": true, 00:11:07.455 "num_base_bdevs": 3, 00:11:07.455 "num_base_bdevs_discovered": 3, 00:11:07.455 "num_base_bdevs_operational": 3, 00:11:07.455 "base_bdevs_list": [ 00:11:07.455 { 00:11:07.455 "name": "pt1", 00:11:07.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.455 "is_configured": true, 00:11:07.455 "data_offset": 2048, 00:11:07.455 "data_size": 63488 00:11:07.455 }, 00:11:07.455 { 00:11:07.455 "name": "pt2", 00:11:07.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.455 "is_configured": true, 00:11:07.455 "data_offset": 2048, 00:11:07.455 "data_size": 63488 00:11:07.455 }, 00:11:07.455 
{ 00:11:07.455 "name": "pt3", 00:11:07.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.455 "is_configured": true, 00:11:07.455 "data_offset": 2048, 00:11:07.455 "data_size": 63488 00:11:07.455 } 00:11:07.455 ] 00:11:07.455 } 00:11:07.455 } 00:11:07.455 }' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:07.455 pt2 00:11:07.455 pt3' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:07.455 22:28:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.455 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 
[2024-09-27 22:28:03.357368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e2951195-d5c0-4892-b57a-48fc05edb935 '!=' e2951195-d5c0-4892-b57a-48fc05edb935 ']' 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.714 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65561 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65561 ']' 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65561 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65561 00:11:07.715 killing process with pid 65561 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65561' 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65561 00:11:07.715 [2024-09-27 22:28:03.441277] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.715 [2024-09-27 22:28:03.441378] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.715 22:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 65561 00:11:07.715 [2024-09-27 22:28:03.441434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.715 [2024-09-27 22:28:03.441451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:07.974 [2024-09-27 22:28:03.762450] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.506 22:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:10.506 00:11:10.506 real 0m6.611s 00:11:10.506 user 0m8.904s 00:11:10.506 sys 0m1.161s 00:11:10.506 22:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.506 22:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.506 ************************************ 00:11:10.506 END TEST raid_superblock_test 00:11:10.506 ************************************ 00:11:10.506 22:28:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:10.506 22:28:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:10.506 22:28:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.506 22:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.506 ************************************ 00:11:10.506 START TEST raid_read_error_test 00:11:10.506 ************************************ 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:10.506 22:28:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HxW2EM4v7K 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65825 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65825 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65825 ']' 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.506 22:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.506 [2024-09-27 22:28:05.976362] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:11:10.506 [2024-09-27 22:28:05.976490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65825 ] 00:11:10.506 [2024-09-27 22:28:06.145499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.766 [2024-09-27 22:28:06.385818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.766 [2024-09-27 22:28:06.627658] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.766 [2024-09-27 22:28:06.627699] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.333 BaseBdev1_malloc 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.333 true 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.333 [2024-09-27 22:28:07.166525] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:11.333 [2024-09-27 22:28:07.166722] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.333 [2024-09-27 22:28:07.166779] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:11.333 [2024-09-27 22:28:07.166866] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.333 [2024-09-27 22:28:07.169332] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.333 [2024-09-27 22:28:07.169475] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:11.333 BaseBdev1 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.333 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 BaseBdev2_malloc 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 true 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 [2024-09-27 22:28:07.240004] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:11.638 [2024-09-27 22:28:07.240066] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.638 [2024-09-27 22:28:07.240087] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:11.638 [2024-09-27 22:28:07.240101] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.638 [2024-09-27 22:28:07.242486] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.638 [2024-09-27 22:28:07.242532] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:11.638 BaseBdev2 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 BaseBdev3_malloc 00:11:11.638 22:28:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 true 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 [2024-09-27 22:28:07.313367] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:11.638 [2024-09-27 22:28:07.313563] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.638 [2024-09-27 22:28:07.313623] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:11.638 [2024-09-27 22:28:07.313760] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.638 [2024-09-27 22:28:07.316247] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.638 [2024-09-27 22:28:07.316295] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:11.638 BaseBdev3 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 [2024-09-27 22:28:07.325439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.638 [2024-09-27 22:28:07.327649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.638 [2024-09-27 22:28:07.327864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.638 [2024-09-27 22:28:07.328107] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:11.638 [2024-09-27 22:28:07.328122] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:11.638 [2024-09-27 22:28:07.328425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:11.638 [2024-09-27 22:28:07.328589] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:11.638 [2024-09-27 22:28:07.328603] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:11.638 [2024-09-27 22:28:07.328778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.638 22:28:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.638 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.638 "name": "raid_bdev1", 00:11:11.638 "uuid": "227baee9-9b33-49ce-8f28-6d0d94aaba2f", 00:11:11.638 "strip_size_kb": 64, 00:11:11.638 "state": "online", 00:11:11.638 "raid_level": "raid0", 00:11:11.638 "superblock": true, 00:11:11.638 "num_base_bdevs": 3, 00:11:11.638 "num_base_bdevs_discovered": 3, 00:11:11.638 "num_base_bdevs_operational": 3, 00:11:11.638 "base_bdevs_list": [ 00:11:11.638 { 00:11:11.638 "name": "BaseBdev1", 00:11:11.638 "uuid": "5f71c4fd-3ba9-5668-acef-e0ebb8975ca8", 00:11:11.638 "is_configured": true, 00:11:11.638 "data_offset": 2048, 00:11:11.638 "data_size": 63488 00:11:11.638 }, 00:11:11.638 { 00:11:11.638 "name": "BaseBdev2", 00:11:11.638 "uuid": "b5c61224-0d65-5dc5-911b-a2b0e1e2c462", 00:11:11.638 "is_configured": true, 00:11:11.638 "data_offset": 2048, 00:11:11.638 "data_size": 63488 
00:11:11.638 }, 00:11:11.638 { 00:11:11.639 "name": "BaseBdev3", 00:11:11.639 "uuid": "135d51e6-5c59-5b77-b88b-26a40e350df3", 00:11:11.639 "is_configured": true, 00:11:11.639 "data_offset": 2048, 00:11:11.639 "data_size": 63488 00:11:11.639 } 00:11:11.639 ] 00:11:11.639 }' 00:11:11.639 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.639 22:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.902 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:11.902 22:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.159 [2024-09-27 22:28:07.830042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.094 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.094 "name": "raid_bdev1", 00:11:13.094 "uuid": "227baee9-9b33-49ce-8f28-6d0d94aaba2f", 00:11:13.094 "strip_size_kb": 64, 00:11:13.094 "state": "online", 00:11:13.094 "raid_level": "raid0", 00:11:13.094 "superblock": true, 00:11:13.094 "num_base_bdevs": 3, 00:11:13.094 "num_base_bdevs_discovered": 3, 00:11:13.094 "num_base_bdevs_operational": 3, 00:11:13.094 "base_bdevs_list": [ 00:11:13.094 { 00:11:13.094 "name": "BaseBdev1", 00:11:13.094 "uuid": "5f71c4fd-3ba9-5668-acef-e0ebb8975ca8", 00:11:13.094 "is_configured": true, 00:11:13.094 "data_offset": 2048, 00:11:13.094 "data_size": 63488 
00:11:13.094 }, 00:11:13.094 { 00:11:13.094 "name": "BaseBdev2", 00:11:13.094 "uuid": "b5c61224-0d65-5dc5-911b-a2b0e1e2c462", 00:11:13.094 "is_configured": true, 00:11:13.094 "data_offset": 2048, 00:11:13.094 "data_size": 63488 00:11:13.094 }, 00:11:13.094 { 00:11:13.094 "name": "BaseBdev3", 00:11:13.095 "uuid": "135d51e6-5c59-5b77-b88b-26a40e350df3", 00:11:13.095 "is_configured": true, 00:11:13.095 "data_offset": 2048, 00:11:13.095 "data_size": 63488 00:11:13.095 } 00:11:13.095 ] 00:11:13.095 }' 00:11:13.095 22:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.095 22:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.353 22:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:13.353 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.353 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.353 [2024-09-27 22:28:09.168824] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.353 [2024-09-27 22:28:09.168866] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.353 [2024-09-27 22:28:09.171454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.353 [2024-09-27 22:28:09.171507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.353 [2024-09-27 22:28:09.171545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.353 [2024-09-27 22:28:09.171556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:13.353 { 00:11:13.353 "results": [ 00:11:13.354 { 00:11:13.354 "job": "raid_bdev1", 00:11:13.354 "core_mask": "0x1", 00:11:13.354 "workload": "randrw", 00:11:13.354 "percentage": 50, 
00:11:13.354 "status": "finished", 00:11:13.354 "queue_depth": 1, 00:11:13.354 "io_size": 131072, 00:11:13.354 "runtime": 1.338888, 00:11:13.354 "iops": 16059.59572421293, 00:11:13.354 "mibps": 2007.4494655266162, 00:11:13.354 "io_failed": 1, 00:11:13.354 "io_timeout": 0, 00:11:13.354 "avg_latency_us": 85.84398291673881, 00:11:13.354 "min_latency_us": 19.43132530120482, 00:11:13.354 "max_latency_us": 1414.6827309236949 00:11:13.354 } 00:11:13.354 ], 00:11:13.354 "core_count": 1 00:11:13.354 } 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65825 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65825 ']' 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65825 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65825 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.354 killing process with pid 65825 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65825' 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65825 00:11:13.354 [2024-09-27 22:28:09.217044] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.354 22:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65825 00:11:13.612 [2024-09-27 
22:28:09.460035] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HxW2EM4v7K 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:16.142 00:11:16.142 real 0m5.651s 00:11:16.142 user 0m6.380s 00:11:16.142 sys 0m0.690s 00:11:16.142 ************************************ 00:11:16.142 END TEST raid_read_error_test 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.142 22:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.142 ************************************ 00:11:16.142 22:28:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:16.142 22:28:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:16.142 22:28:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.142 22:28:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.142 ************************************ 00:11:16.142 START TEST raid_write_error_test 00:11:16.142 ************************************ 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:11:16.142 22:28:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:16.142 22:28:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d11Y2uloEn 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65982 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65982 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65982 ']' 00:11:16.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.142 22:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.142 [2024-09-27 22:28:11.705891] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:11:16.142 [2024-09-27 22:28:11.706263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65982 ] 00:11:16.142 [2024-09-27 22:28:11.880505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.401 [2024-09-27 22:28:12.112862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.659 [2024-09-27 22:28:12.337294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.659 [2024-09-27 22:28:12.337363] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 BaseBdev1_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 true 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 [2024-09-27 22:28:12.883850] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.224 [2024-09-27 22:28:12.883913] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.224 [2024-09-27 22:28:12.883933] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.224 [2024-09-27 22:28:12.883948] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.224 [2024-09-27 22:28:12.886376] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.224 BaseBdev1 00:11:17.224 [2024-09-27 22:28:12.886536] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.224 BaseBdev2_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 true 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 [2024-09-27 22:28:12.959194] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.224 [2024-09-27 22:28:12.959384] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.224 [2024-09-27 22:28:12.959440] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.224 [2024-09-27 22:28:12.959521] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.224 [2024-09-27 22:28:12.962005] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.224 [2024-09-27 22:28:12.962140] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.224 BaseBdev2 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.224 22:28:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 BaseBdev3_malloc 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 true 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 [2024-09-27 22:28:13.034143] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.224 [2024-09-27 22:28:13.034197] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.224 [2024-09-27 22:28:13.034218] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.224 [2024-09-27 22:28:13.034232] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.224 [2024-09-27 22:28:13.036727] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.224 [2024-09-27 22:28:13.036767] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:17.224 BaseBdev3 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.224 [2024-09-27 22:28:13.046199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.224 [2024-09-27 22:28:13.048363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.224 [2024-09-27 22:28:13.048443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.224 [2024-09-27 22:28:13.048670] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.224 [2024-09-27 22:28:13.048691] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:17.224 [2024-09-27 22:28:13.048995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.224 [2024-09-27 22:28:13.049202] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.224 [2024-09-27 22:28:13.049223] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:17.224 [2024-09-27 22:28:13.049400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:17.224 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.225 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.482 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.482 "name": "raid_bdev1", 00:11:17.482 "uuid": "cf0fb82d-4614-4701-82a2-dbc7839629ae", 00:11:17.482 "strip_size_kb": 64, 00:11:17.482 "state": "online", 00:11:17.482 "raid_level": "raid0", 00:11:17.482 "superblock": true, 00:11:17.482 "num_base_bdevs": 3, 00:11:17.482 "num_base_bdevs_discovered": 3, 00:11:17.482 "num_base_bdevs_operational": 3, 00:11:17.482 "base_bdevs_list": [ 00:11:17.482 { 00:11:17.482 "name": "BaseBdev1", 
00:11:17.482 "uuid": "cf9e3f03-cfac-5172-ad04-0803edb75e5a", 00:11:17.482 "is_configured": true, 00:11:17.482 "data_offset": 2048, 00:11:17.482 "data_size": 63488 00:11:17.482 }, 00:11:17.482 { 00:11:17.482 "name": "BaseBdev2", 00:11:17.482 "uuid": "71fef8d8-69ad-5055-81b1-9c9dec2dde56", 00:11:17.482 "is_configured": true, 00:11:17.482 "data_offset": 2048, 00:11:17.482 "data_size": 63488 00:11:17.482 }, 00:11:17.482 { 00:11:17.482 "name": "BaseBdev3", 00:11:17.482 "uuid": "0397ff09-9fda-50b9-acce-2c76f48638dd", 00:11:17.482 "is_configured": true, 00:11:17.482 "data_offset": 2048, 00:11:17.482 "data_size": 63488 00:11:17.482 } 00:11:17.482 ] 00:11:17.482 }' 00:11:17.482 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.482 22:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.739 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:17.739 22:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:17.739 [2024-09-27 22:28:13.590944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.671 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.927 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.927 "name": "raid_bdev1", 00:11:18.927 "uuid": "cf0fb82d-4614-4701-82a2-dbc7839629ae", 00:11:18.927 "strip_size_kb": 64, 00:11:18.927 "state": "online", 00:11:18.927 
"raid_level": "raid0", 00:11:18.927 "superblock": true, 00:11:18.927 "num_base_bdevs": 3, 00:11:18.927 "num_base_bdevs_discovered": 3, 00:11:18.927 "num_base_bdevs_operational": 3, 00:11:18.927 "base_bdevs_list": [ 00:11:18.927 { 00:11:18.927 "name": "BaseBdev1", 00:11:18.927 "uuid": "cf9e3f03-cfac-5172-ad04-0803edb75e5a", 00:11:18.927 "is_configured": true, 00:11:18.927 "data_offset": 2048, 00:11:18.927 "data_size": 63488 00:11:18.927 }, 00:11:18.927 { 00:11:18.927 "name": "BaseBdev2", 00:11:18.927 "uuid": "71fef8d8-69ad-5055-81b1-9c9dec2dde56", 00:11:18.927 "is_configured": true, 00:11:18.927 "data_offset": 2048, 00:11:18.927 "data_size": 63488 00:11:18.927 }, 00:11:18.927 { 00:11:18.927 "name": "BaseBdev3", 00:11:18.927 "uuid": "0397ff09-9fda-50b9-acce-2c76f48638dd", 00:11:18.927 "is_configured": true, 00:11:18.927 "data_offset": 2048, 00:11:18.927 "data_size": 63488 00:11:18.927 } 00:11:18.927 ] 00:11:18.927 }' 00:11:18.927 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.927 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.185 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.185 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.185 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.185 [2024-09-27 22:28:14.919176] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.185 [2024-09-27 22:28:14.919211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.185 [2024-09-27 22:28:14.921751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.185 [2024-09-27 22:28:14.921800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.185 [2024-09-27 22:28:14.921839] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.185 [2024-09-27 22:28:14.921850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:19.185 { 00:11:19.185 "results": [ 00:11:19.185 { 00:11:19.185 "job": "raid_bdev1", 00:11:19.185 "core_mask": "0x1", 00:11:19.185 "workload": "randrw", 00:11:19.185 "percentage": 50, 00:11:19.185 "status": "finished", 00:11:19.185 "queue_depth": 1, 00:11:19.185 "io_size": 131072, 00:11:19.185 "runtime": 1.328233, 00:11:19.185 "iops": 16188.424771858552, 00:11:19.185 "mibps": 2023.553096482319, 00:11:19.185 "io_failed": 1, 00:11:19.185 "io_timeout": 0, 00:11:19.185 "avg_latency_us": 85.1242611706184, 00:11:19.185 "min_latency_us": 18.814457831325303, 00:11:19.185 "max_latency_us": 1401.5228915662651 00:11:19.185 } 00:11:19.185 ], 00:11:19.186 "core_count": 1 00:11:19.186 } 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65982 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65982 ']' 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65982 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65982 00:11:19.186 killing process with pid 65982 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.186 22:28:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65982' 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65982 00:11:19.186 [2024-09-27 22:28:14.974445] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.186 22:28:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65982 00:11:19.443 [2024-09-27 22:28:15.217670] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d11Y2uloEn 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:21.996 00:11:21.996 real 0m5.701s 00:11:21.996 user 0m6.464s 00:11:21.996 sys 0m0.688s 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.996 22:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.996 ************************************ 00:11:21.996 END TEST raid_write_error_test 00:11:21.996 ************************************ 00:11:21.996 22:28:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:21.996 22:28:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:21.996 22:28:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:21.996 22:28:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.996 22:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.996 ************************************ 00:11:21.996 START TEST raid_state_function_test 00:11:21.996 ************************************ 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.996 22:28:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66131 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.996 Process raid pid: 66131 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66131' 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66131 00:11:21.996 22:28:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 66131 ']' 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.996 22:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.996 [2024-09-27 22:28:17.477862] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:11:21.996 [2024-09-27 22:28:17.477998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.996 [2024-09-27 22:28:17.654476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.254 [2024-09-27 22:28:17.893845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.512 [2024-09-27 22:28:18.141814] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.512 [2024-09-27 22:28:18.141857] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.770 [2024-09-27 22:28:18.614862] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.770 [2024-09-27 22:28:18.614922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.770 [2024-09-27 22:28:18.614932] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.770 [2024-09-27 22:28:18.614947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.770 [2024-09-27 22:28:18.614955] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.770 [2024-09-27 22:28:18.614967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.770 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.028 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.028 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.028 "name": "Existed_Raid", 00:11:23.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.028 "strip_size_kb": 64, 00:11:23.028 "state": "configuring", 00:11:23.028 "raid_level": "concat", 00:11:23.028 "superblock": false, 00:11:23.028 "num_base_bdevs": 3, 00:11:23.028 "num_base_bdevs_discovered": 0, 00:11:23.028 "num_base_bdevs_operational": 3, 00:11:23.028 "base_bdevs_list": [ 00:11:23.028 { 00:11:23.028 "name": "BaseBdev1", 00:11:23.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.028 "is_configured": false, 00:11:23.028 "data_offset": 0, 00:11:23.028 "data_size": 0 00:11:23.028 }, 00:11:23.028 { 00:11:23.028 "name": "BaseBdev2", 00:11:23.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.028 "is_configured": false, 00:11:23.028 "data_offset": 0, 00:11:23.028 "data_size": 0 00:11:23.028 }, 00:11:23.028 { 00:11:23.028 "name": "BaseBdev3", 00:11:23.028 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:23.028 "is_configured": false, 00:11:23.028 "data_offset": 0, 00:11:23.028 "data_size": 0 00:11:23.028 } 00:11:23.028 ] 00:11:23.028 }' 00:11:23.028 22:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.028 22:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 [2024-09-27 22:28:19.050203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.286 [2024-09-27 22:28:19.050249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 [2024-09-27 22:28:19.062199] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.286 [2024-09-27 22:28:19.062262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.286 [2024-09-27 22:28:19.062272] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.286 [2024-09-27 22:28:19.062285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:23.286 [2024-09-27 22:28:19.062293] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.286 [2024-09-27 22:28:19.062305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 [2024-09-27 22:28:19.117771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.286 BaseBdev1 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.286 [ 00:11:23.286 { 00:11:23.286 "name": "BaseBdev1", 00:11:23.286 "aliases": [ 00:11:23.286 "748fb5e1-0a33-4b41-bb26-c7eba586ef3b" 00:11:23.286 ], 00:11:23.286 "product_name": "Malloc disk", 00:11:23.286 "block_size": 512, 00:11:23.286 "num_blocks": 65536, 00:11:23.286 "uuid": "748fb5e1-0a33-4b41-bb26-c7eba586ef3b", 00:11:23.286 "assigned_rate_limits": { 00:11:23.286 "rw_ios_per_sec": 0, 00:11:23.286 "rw_mbytes_per_sec": 0, 00:11:23.286 "r_mbytes_per_sec": 0, 00:11:23.286 "w_mbytes_per_sec": 0 00:11:23.286 }, 00:11:23.286 "claimed": true, 00:11:23.286 "claim_type": "exclusive_write", 00:11:23.286 "zoned": false, 00:11:23.286 "supported_io_types": { 00:11:23.286 "read": true, 00:11:23.286 "write": true, 00:11:23.286 "unmap": true, 00:11:23.286 "flush": true, 00:11:23.286 "reset": true, 00:11:23.286 "nvme_admin": false, 00:11:23.286 "nvme_io": false, 00:11:23.286 "nvme_io_md": false, 00:11:23.286 "write_zeroes": true, 00:11:23.286 "zcopy": true, 00:11:23.286 "get_zone_info": false, 00:11:23.286 "zone_management": false, 00:11:23.286 "zone_append": false, 00:11:23.286 "compare": false, 00:11:23.286 "compare_and_write": false, 00:11:23.286 "abort": true, 00:11:23.286 "seek_hole": false, 00:11:23.286 "seek_data": false, 00:11:23.286 "copy": true, 00:11:23.286 "nvme_iov_md": false 00:11:23.286 }, 00:11:23.286 "memory_domains": [ 00:11:23.286 { 00:11:23.286 "dma_device_id": "system", 00:11:23.286 "dma_device_type": 1 00:11:23.286 }, 00:11:23.286 { 00:11:23.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:23.286 "dma_device_type": 2 00:11:23.286 } 00:11:23.286 ], 00:11:23.286 "driver_specific": {} 00:11:23.286 } 00:11:23.286 ] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.286 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.287 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.287 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.287 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.287 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.287 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.544 22:28:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.544 "name": "Existed_Raid", 00:11:23.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.544 "strip_size_kb": 64, 00:11:23.544 "state": "configuring", 00:11:23.544 "raid_level": "concat", 00:11:23.544 "superblock": false, 00:11:23.544 "num_base_bdevs": 3, 00:11:23.544 "num_base_bdevs_discovered": 1, 00:11:23.544 "num_base_bdevs_operational": 3, 00:11:23.544 "base_bdevs_list": [ 00:11:23.544 { 00:11:23.544 "name": "BaseBdev1", 00:11:23.544 "uuid": "748fb5e1-0a33-4b41-bb26-c7eba586ef3b", 00:11:23.544 "is_configured": true, 00:11:23.544 "data_offset": 0, 00:11:23.544 "data_size": 65536 00:11:23.544 }, 00:11:23.544 { 00:11:23.544 "name": "BaseBdev2", 00:11:23.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.544 "is_configured": false, 00:11:23.544 "data_offset": 0, 00:11:23.544 "data_size": 0 00:11:23.544 }, 00:11:23.544 { 00:11:23.544 "name": "BaseBdev3", 00:11:23.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.544 "is_configured": false, 00:11:23.544 "data_offset": 0, 00:11:23.544 "data_size": 0 00:11:23.544 } 00:11:23.544 ] 00:11:23.544 }' 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.544 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.802 [2024-09-27 22:28:19.601165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.802 [2024-09-27 22:28:19.601237] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.802 [2024-09-27 22:28:19.613194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.802 [2024-09-27 22:28:19.615493] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.802 [2024-09-27 22:28:19.615548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.802 [2024-09-27 22:28:19.615560] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.802 [2024-09-27 22:28:19.615573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.802 22:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.802 "name": "Existed_Raid", 00:11:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.802 "strip_size_kb": 64, 00:11:23.802 "state": "configuring", 00:11:23.802 "raid_level": "concat", 00:11:23.802 "superblock": false, 00:11:23.802 "num_base_bdevs": 3, 00:11:23.802 "num_base_bdevs_discovered": 1, 00:11:23.802 "num_base_bdevs_operational": 3, 00:11:23.802 "base_bdevs_list": [ 00:11:23.802 { 00:11:23.802 "name": "BaseBdev1", 00:11:23.802 "uuid": "748fb5e1-0a33-4b41-bb26-c7eba586ef3b", 00:11:23.802 "is_configured": true, 00:11:23.802 "data_offset": 
0, 00:11:23.802 "data_size": 65536 00:11:23.802 }, 00:11:23.802 { 00:11:23.802 "name": "BaseBdev2", 00:11:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.802 "is_configured": false, 00:11:23.802 "data_offset": 0, 00:11:23.802 "data_size": 0 00:11:23.802 }, 00:11:23.802 { 00:11:23.802 "name": "BaseBdev3", 00:11:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.802 "is_configured": false, 00:11:23.802 "data_offset": 0, 00:11:23.802 "data_size": 0 00:11:23.802 } 00:11:23.802 ] 00:11:23.802 }' 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.802 22:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.397 [2024-09-27 22:28:20.090588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.397 BaseBdev2 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.397 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.397 [ 00:11:24.397 { 00:11:24.397 "name": "BaseBdev2", 00:11:24.397 "aliases": [ 00:11:24.397 "9b587840-5133-40a6-a7ce-d979140ed2e9" 00:11:24.397 ], 00:11:24.397 "product_name": "Malloc disk", 00:11:24.397 "block_size": 512, 00:11:24.397 "num_blocks": 65536, 00:11:24.397 "uuid": "9b587840-5133-40a6-a7ce-d979140ed2e9", 00:11:24.397 "assigned_rate_limits": { 00:11:24.397 "rw_ios_per_sec": 0, 00:11:24.397 "rw_mbytes_per_sec": 0, 00:11:24.397 "r_mbytes_per_sec": 0, 00:11:24.397 "w_mbytes_per_sec": 0 00:11:24.397 }, 00:11:24.397 "claimed": true, 00:11:24.397 "claim_type": "exclusive_write", 00:11:24.397 "zoned": false, 00:11:24.397 "supported_io_types": { 00:11:24.397 "read": true, 00:11:24.397 "write": true, 00:11:24.397 "unmap": true, 00:11:24.397 "flush": true, 00:11:24.397 "reset": true, 00:11:24.397 "nvme_admin": false, 00:11:24.397 "nvme_io": false, 00:11:24.397 "nvme_io_md": false, 00:11:24.397 "write_zeroes": true, 00:11:24.397 "zcopy": true, 00:11:24.397 "get_zone_info": false, 00:11:24.397 "zone_management": false, 00:11:24.397 "zone_append": false, 00:11:24.397 "compare": false, 00:11:24.397 "compare_and_write": false, 00:11:24.397 "abort": true, 00:11:24.397 "seek_hole": 
false, 00:11:24.397 "seek_data": false, 00:11:24.397 "copy": true, 00:11:24.397 "nvme_iov_md": false 00:11:24.397 }, 00:11:24.397 "memory_domains": [ 00:11:24.397 { 00:11:24.397 "dma_device_id": "system", 00:11:24.398 "dma_device_type": 1 00:11:24.398 }, 00:11:24.398 { 00:11:24.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.398 "dma_device_type": 2 00:11:24.398 } 00:11:24.398 ], 00:11:24.398 "driver_specific": {} 00:11:24.398 } 00:11:24.398 ] 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.398 "name": "Existed_Raid", 00:11:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.398 "strip_size_kb": 64, 00:11:24.398 "state": "configuring", 00:11:24.398 "raid_level": "concat", 00:11:24.398 "superblock": false, 00:11:24.398 "num_base_bdevs": 3, 00:11:24.398 "num_base_bdevs_discovered": 2, 00:11:24.398 "num_base_bdevs_operational": 3, 00:11:24.398 "base_bdevs_list": [ 00:11:24.398 { 00:11:24.398 "name": "BaseBdev1", 00:11:24.398 "uuid": "748fb5e1-0a33-4b41-bb26-c7eba586ef3b", 00:11:24.398 "is_configured": true, 00:11:24.398 "data_offset": 0, 00:11:24.398 "data_size": 65536 00:11:24.398 }, 00:11:24.398 { 00:11:24.398 "name": "BaseBdev2", 00:11:24.398 "uuid": "9b587840-5133-40a6-a7ce-d979140ed2e9", 00:11:24.398 "is_configured": true, 00:11:24.398 "data_offset": 0, 00:11:24.398 "data_size": 65536 00:11:24.398 }, 00:11:24.398 { 00:11:24.398 "name": "BaseBdev3", 00:11:24.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.398 "is_configured": false, 00:11:24.398 "data_offset": 0, 00:11:24.398 "data_size": 0 00:11:24.398 } 00:11:24.398 ] 00:11:24.398 }' 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.398 22:28:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.656 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.656 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.656 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.915 [2024-09-27 22:28:20.576067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.915 [2024-09-27 22:28:20.576124] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.915 [2024-09-27 22:28:20.576139] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:24.915 [2024-09-27 22:28:20.576416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:24.915 [2024-09-27 22:28:20.576706] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.915 [2024-09-27 22:28:20.576727] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.915 [2024-09-27 22:28:20.576991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.915 BaseBdev3 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.915 22:28:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.915 [ 00:11:24.915 { 00:11:24.915 "name": "BaseBdev3", 00:11:24.915 "aliases": [ 00:11:24.915 "f6739f88-888e-45d2-8146-3de656d210f5" 00:11:24.915 ], 00:11:24.915 "product_name": "Malloc disk", 00:11:24.915 "block_size": 512, 00:11:24.915 "num_blocks": 65536, 00:11:24.915 "uuid": "f6739f88-888e-45d2-8146-3de656d210f5", 00:11:24.915 "assigned_rate_limits": { 00:11:24.915 "rw_ios_per_sec": 0, 00:11:24.915 "rw_mbytes_per_sec": 0, 00:11:24.915 "r_mbytes_per_sec": 0, 00:11:24.915 "w_mbytes_per_sec": 0 00:11:24.915 }, 00:11:24.915 "claimed": true, 00:11:24.915 "claim_type": "exclusive_write", 00:11:24.915 "zoned": false, 00:11:24.915 "supported_io_types": { 00:11:24.915 "read": true, 00:11:24.915 "write": true, 00:11:24.915 "unmap": true, 00:11:24.915 "flush": true, 00:11:24.915 "reset": true, 00:11:24.915 "nvme_admin": false, 00:11:24.915 "nvme_io": false, 00:11:24.915 "nvme_io_md": false, 00:11:24.915 "write_zeroes": true, 00:11:24.915 "zcopy": true, 00:11:24.915 "get_zone_info": false, 00:11:24.915 "zone_management": false, 00:11:24.915 "zone_append": false, 00:11:24.915 "compare": false, 
00:11:24.915 "compare_and_write": false, 00:11:24.915 "abort": true, 00:11:24.915 "seek_hole": false, 00:11:24.915 "seek_data": false, 00:11:24.915 "copy": true, 00:11:24.915 "nvme_iov_md": false 00:11:24.915 }, 00:11:24.915 "memory_domains": [ 00:11:24.915 { 00:11:24.915 "dma_device_id": "system", 00:11:24.915 "dma_device_type": 1 00:11:24.915 }, 00:11:24.915 { 00:11:24.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.915 "dma_device_type": 2 00:11:24.915 } 00:11:24.915 ], 00:11:24.915 "driver_specific": {} 00:11:24.915 } 00:11:24.915 ] 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.915 "name": "Existed_Raid", 00:11:24.915 "uuid": "70ac7140-e4e8-42dc-898c-79f479fc8475", 00:11:24.915 "strip_size_kb": 64, 00:11:24.915 "state": "online", 00:11:24.915 "raid_level": "concat", 00:11:24.915 "superblock": false, 00:11:24.915 "num_base_bdevs": 3, 00:11:24.915 "num_base_bdevs_discovered": 3, 00:11:24.915 "num_base_bdevs_operational": 3, 00:11:24.915 "base_bdevs_list": [ 00:11:24.915 { 00:11:24.915 "name": "BaseBdev1", 00:11:24.915 "uuid": "748fb5e1-0a33-4b41-bb26-c7eba586ef3b", 00:11:24.915 "is_configured": true, 00:11:24.915 "data_offset": 0, 00:11:24.915 "data_size": 65536 00:11:24.915 }, 00:11:24.915 { 00:11:24.915 "name": "BaseBdev2", 00:11:24.915 "uuid": "9b587840-5133-40a6-a7ce-d979140ed2e9", 00:11:24.915 "is_configured": true, 00:11:24.915 "data_offset": 0, 00:11:24.915 "data_size": 65536 00:11:24.915 }, 00:11:24.915 { 00:11:24.915 "name": "BaseBdev3", 00:11:24.915 "uuid": "f6739f88-888e-45d2-8146-3de656d210f5", 00:11:24.915 "is_configured": true, 00:11:24.915 "data_offset": 0, 00:11:24.915 "data_size": 65536 00:11:24.915 } 00:11:24.915 ] 00:11:24.915 }' 00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
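Once all three base bdevs are claimed, the raid moves to `online` and the log shows `blockcnt 196608, blocklen 512`. That number follows from the base bdev geometry: each `bdev_malloc_create 32 512` call makes a 32 MiB bdev with 512-byte blocks (`num_blocks: 65536`), and a concat volume simply concatenates its members, so its capacity is the sum. A sketch of the arithmetic behind the numbers in the log:

```python
# Geometry from the log: bdev_malloc_create 32 512 -> 32 MiB, 512 B blocks
MiB = 1024 * 1024
block_size = 512
base_size_mib = 32
num_base_bdevs = 3

blocks_per_base = base_size_mib * MiB // block_size
raid_blocks = num_base_bdevs * blocks_per_base  # concat sums member sizes

print(blocks_per_base)  # 65536, matches each BaseBdev's num_blocks
print(raid_blocks)      # 196608, matches Existed_Raid's num_blocks
```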
00:11:24.915 22:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.174 [2024-09-27 22:28:21.027814] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.174 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.433 "name": "Existed_Raid", 00:11:25.433 "aliases": [ 00:11:25.433 "70ac7140-e4e8-42dc-898c-79f479fc8475" 00:11:25.433 ], 00:11:25.433 "product_name": "Raid Volume", 00:11:25.433 "block_size": 512, 00:11:25.433 "num_blocks": 196608, 00:11:25.433 "uuid": "70ac7140-e4e8-42dc-898c-79f479fc8475", 00:11:25.433 "assigned_rate_limits": { 00:11:25.433 "rw_ios_per_sec": 0, 00:11:25.433 "rw_mbytes_per_sec": 0, 00:11:25.433 "r_mbytes_per_sec": 
0, 00:11:25.433 "w_mbytes_per_sec": 0 00:11:25.433 }, 00:11:25.433 "claimed": false, 00:11:25.433 "zoned": false, 00:11:25.433 "supported_io_types": { 00:11:25.433 "read": true, 00:11:25.433 "write": true, 00:11:25.433 "unmap": true, 00:11:25.433 "flush": true, 00:11:25.433 "reset": true, 00:11:25.433 "nvme_admin": false, 00:11:25.433 "nvme_io": false, 00:11:25.433 "nvme_io_md": false, 00:11:25.433 "write_zeroes": true, 00:11:25.433 "zcopy": false, 00:11:25.433 "get_zone_info": false, 00:11:25.433 "zone_management": false, 00:11:25.433 "zone_append": false, 00:11:25.433 "compare": false, 00:11:25.433 "compare_and_write": false, 00:11:25.433 "abort": false, 00:11:25.433 "seek_hole": false, 00:11:25.433 "seek_data": false, 00:11:25.433 "copy": false, 00:11:25.433 "nvme_iov_md": false 00:11:25.433 }, 00:11:25.433 "memory_domains": [ 00:11:25.433 { 00:11:25.433 "dma_device_id": "system", 00:11:25.433 "dma_device_type": 1 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.433 "dma_device_type": 2 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "dma_device_id": "system", 00:11:25.433 "dma_device_type": 1 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.433 "dma_device_type": 2 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "dma_device_id": "system", 00:11:25.433 "dma_device_type": 1 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.433 "dma_device_type": 2 00:11:25.433 } 00:11:25.433 ], 00:11:25.433 "driver_specific": { 00:11:25.433 "raid": { 00:11:25.433 "uuid": "70ac7140-e4e8-42dc-898c-79f479fc8475", 00:11:25.433 "strip_size_kb": 64, 00:11:25.433 "state": "online", 00:11:25.433 "raid_level": "concat", 00:11:25.433 "superblock": false, 00:11:25.433 "num_base_bdevs": 3, 00:11:25.433 "num_base_bdevs_discovered": 3, 00:11:25.433 "num_base_bdevs_operational": 3, 00:11:25.433 "base_bdevs_list": [ 00:11:25.433 { 00:11:25.433 "name": "BaseBdev1", 
00:11:25.433 "uuid": "748fb5e1-0a33-4b41-bb26-c7eba586ef3b", 00:11:25.433 "is_configured": true, 00:11:25.433 "data_offset": 0, 00:11:25.433 "data_size": 65536 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "name": "BaseBdev2", 00:11:25.433 "uuid": "9b587840-5133-40a6-a7ce-d979140ed2e9", 00:11:25.433 "is_configured": true, 00:11:25.433 "data_offset": 0, 00:11:25.433 "data_size": 65536 00:11:25.433 }, 00:11:25.433 { 00:11:25.433 "name": "BaseBdev3", 00:11:25.433 "uuid": "f6739f88-888e-45d2-8146-3de656d210f5", 00:11:25.433 "is_configured": true, 00:11:25.433 "data_offset": 0, 00:11:25.433 "data_size": 65536 00:11:25.433 } 00:11:25.433 ] 00:11:25.433 } 00:11:25.433 } 00:11:25.433 }' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:25.433 BaseBdev2 00:11:25.433 BaseBdev3' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.433 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.433 [2024-09-27 22:28:21.295258] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.433 [2024-09-27 22:28:21.295291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.433 [2024-09-27 22:28:21.295366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.709 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.710 "name": "Existed_Raid", 00:11:25.710 "uuid": "70ac7140-e4e8-42dc-898c-79f479fc8475", 00:11:25.710 "strip_size_kb": 64, 00:11:25.710 "state": "offline", 00:11:25.710 "raid_level": "concat", 00:11:25.710 "superblock": false, 00:11:25.710 "num_base_bdevs": 3, 00:11:25.710 "num_base_bdevs_discovered": 2, 00:11:25.710 "num_base_bdevs_operational": 2, 00:11:25.710 "base_bdevs_list": [ 00:11:25.710 { 00:11:25.710 "name": null, 00:11:25.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.710 "is_configured": false, 00:11:25.710 "data_offset": 0, 00:11:25.710 "data_size": 65536 00:11:25.710 }, 00:11:25.710 { 00:11:25.710 "name": "BaseBdev2", 00:11:25.710 "uuid": 
"9b587840-5133-40a6-a7ce-d979140ed2e9", 00:11:25.710 "is_configured": true, 00:11:25.710 "data_offset": 0, 00:11:25.710 "data_size": 65536 00:11:25.710 }, 00:11:25.710 { 00:11:25.710 "name": "BaseBdev3", 00:11:25.710 "uuid": "f6739f88-888e-45d2-8146-3de656d210f5", 00:11:25.710 "is_configured": true, 00:11:25.710 "data_offset": 0, 00:11:25.710 "data_size": 65536 00:11:25.710 } 00:11:25.710 ] 00:11:25.710 }' 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.710 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.968 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.968 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.226 [2024-09-27 22:28:21.896589] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.226 22:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.226 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.226 [2024-09-27 22:28:22.053797] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.226 [2024-09-27 22:28:22.053856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.484 22:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.484 BaseBdev2 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.484 
22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.484 [ 00:11:26.484 { 00:11:26.484 "name": "BaseBdev2", 00:11:26.484 "aliases": [ 00:11:26.484 "07e913e6-573d-4efb-9cce-9a58d1530a9b" 00:11:26.484 ], 00:11:26.484 "product_name": "Malloc disk", 00:11:26.484 "block_size": 512, 00:11:26.484 "num_blocks": 65536, 00:11:26.484 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:26.484 "assigned_rate_limits": { 00:11:26.484 "rw_ios_per_sec": 0, 00:11:26.484 "rw_mbytes_per_sec": 0, 00:11:26.484 "r_mbytes_per_sec": 0, 00:11:26.484 "w_mbytes_per_sec": 0 00:11:26.484 }, 00:11:26.484 "claimed": false, 00:11:26.484 "zoned": false, 00:11:26.484 "supported_io_types": { 00:11:26.484 "read": true, 00:11:26.484 "write": true, 00:11:26.484 "unmap": true, 00:11:26.484 "flush": true, 00:11:26.484 "reset": true, 00:11:26.484 "nvme_admin": false, 00:11:26.484 "nvme_io": false, 00:11:26.484 "nvme_io_md": false, 00:11:26.484 "write_zeroes": true, 
00:11:26.484 "zcopy": true, 00:11:26.484 "get_zone_info": false, 00:11:26.484 "zone_management": false, 00:11:26.484 "zone_append": false, 00:11:26.484 "compare": false, 00:11:26.484 "compare_and_write": false, 00:11:26.484 "abort": true, 00:11:26.484 "seek_hole": false, 00:11:26.484 "seek_data": false, 00:11:26.484 "copy": true, 00:11:26.484 "nvme_iov_md": false 00:11:26.484 }, 00:11:26.484 "memory_domains": [ 00:11:26.484 { 00:11:26.484 "dma_device_id": "system", 00:11:26.484 "dma_device_type": 1 00:11:26.484 }, 00:11:26.484 { 00:11:26.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.484 "dma_device_type": 2 00:11:26.484 } 00:11:26.484 ], 00:11:26.484 "driver_specific": {} 00:11:26.484 } 00:11:26.484 ] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.484 BaseBdev3 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.484 22:28:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.484 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.742 [ 00:11:26.742 { 00:11:26.742 "name": "BaseBdev3", 00:11:26.742 "aliases": [ 00:11:26.742 "41711741-a776-4f0a-aeb9-7180040b6baa" 00:11:26.742 ], 00:11:26.742 "product_name": "Malloc disk", 00:11:26.742 "block_size": 512, 00:11:26.742 "num_blocks": 65536, 00:11:26.742 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:26.742 "assigned_rate_limits": { 00:11:26.742 "rw_ios_per_sec": 0, 00:11:26.742 "rw_mbytes_per_sec": 0, 00:11:26.742 "r_mbytes_per_sec": 0, 00:11:26.742 "w_mbytes_per_sec": 0 00:11:26.742 }, 00:11:26.742 "claimed": false, 00:11:26.742 "zoned": false, 00:11:26.742 "supported_io_types": { 00:11:26.742 "read": true, 00:11:26.742 "write": true, 00:11:26.742 "unmap": true, 00:11:26.742 "flush": true, 00:11:26.742 "reset": true, 00:11:26.742 "nvme_admin": false, 00:11:26.742 "nvme_io": false, 00:11:26.742 "nvme_io_md": false, 00:11:26.742 "write_zeroes": true, 
00:11:26.742 "zcopy": true, 00:11:26.742 "get_zone_info": false, 00:11:26.742 "zone_management": false, 00:11:26.742 "zone_append": false, 00:11:26.742 "compare": false, 00:11:26.742 "compare_and_write": false, 00:11:26.742 "abort": true, 00:11:26.742 "seek_hole": false, 00:11:26.742 "seek_data": false, 00:11:26.742 "copy": true, 00:11:26.742 "nvme_iov_md": false 00:11:26.742 }, 00:11:26.742 "memory_domains": [ 00:11:26.742 { 00:11:26.742 "dma_device_id": "system", 00:11:26.742 "dma_device_type": 1 00:11:26.742 }, 00:11:26.742 { 00:11:26.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.742 "dma_device_type": 2 00:11:26.742 } 00:11:26.742 ], 00:11:26.742 "driver_specific": {} 00:11:26.742 } 00:11:26.742 ] 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.742 [2024-09-27 22:28:22.390554] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.742 [2024-09-27 22:28:22.390608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.742 [2024-09-27 22:28:22.390635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.742 [2024-09-27 22:28:22.392827] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.742 22:28:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.742 "name": "Existed_Raid", 00:11:26.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.742 "strip_size_kb": 64, 00:11:26.742 "state": "configuring", 00:11:26.742 "raid_level": "concat", 00:11:26.742 "superblock": false, 00:11:26.742 "num_base_bdevs": 3, 00:11:26.742 "num_base_bdevs_discovered": 2, 00:11:26.743 "num_base_bdevs_operational": 3, 00:11:26.743 "base_bdevs_list": [ 00:11:26.743 { 00:11:26.743 "name": "BaseBdev1", 00:11:26.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.743 "is_configured": false, 00:11:26.743 "data_offset": 0, 00:11:26.743 "data_size": 0 00:11:26.743 }, 00:11:26.743 { 00:11:26.743 "name": "BaseBdev2", 00:11:26.743 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:26.743 "is_configured": true, 00:11:26.743 "data_offset": 0, 00:11:26.743 "data_size": 65536 00:11:26.743 }, 00:11:26.743 { 00:11:26.743 "name": "BaseBdev3", 00:11:26.743 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:26.743 "is_configured": true, 00:11:26.743 "data_offset": 0, 00:11:26.743 "data_size": 65536 00:11:26.743 } 00:11:26.743 ] 00:11:26.743 }' 00:11:26.743 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.743 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 [2024-09-27 22:28:22.805940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.999 "name": "Existed_Raid", 00:11:26.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.999 "strip_size_kb": 64, 00:11:26.999 "state": "configuring", 00:11:26.999 "raid_level": "concat", 00:11:26.999 "superblock": false, 
00:11:26.999 "num_base_bdevs": 3, 00:11:26.999 "num_base_bdevs_discovered": 1, 00:11:26.999 "num_base_bdevs_operational": 3, 00:11:26.999 "base_bdevs_list": [ 00:11:26.999 { 00:11:26.999 "name": "BaseBdev1", 00:11:26.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.999 "is_configured": false, 00:11:26.999 "data_offset": 0, 00:11:26.999 "data_size": 0 00:11:26.999 }, 00:11:26.999 { 00:11:26.999 "name": null, 00:11:26.999 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:26.999 "is_configured": false, 00:11:26.999 "data_offset": 0, 00:11:26.999 "data_size": 65536 00:11:26.999 }, 00:11:26.999 { 00:11:26.999 "name": "BaseBdev3", 00:11:26.999 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:26.999 "is_configured": true, 00:11:26.999 "data_offset": 0, 00:11:26.999 "data_size": 65536 00:11:26.999 } 00:11:26.999 ] 00:11:26.999 }' 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.999 22:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.565 
22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.565 [2024-09-27 22:28:23.292139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.565 BaseBdev1 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.565 [ 00:11:27.565 { 00:11:27.565 "name": "BaseBdev1", 00:11:27.565 "aliases": [ 00:11:27.565 "652c7651-62eb-4e69-bdf1-979d0bc04113" 00:11:27.565 ], 00:11:27.565 "product_name": 
"Malloc disk", 00:11:27.565 "block_size": 512, 00:11:27.565 "num_blocks": 65536, 00:11:27.565 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:27.565 "assigned_rate_limits": { 00:11:27.565 "rw_ios_per_sec": 0, 00:11:27.565 "rw_mbytes_per_sec": 0, 00:11:27.565 "r_mbytes_per_sec": 0, 00:11:27.565 "w_mbytes_per_sec": 0 00:11:27.565 }, 00:11:27.565 "claimed": true, 00:11:27.565 "claim_type": "exclusive_write", 00:11:27.565 "zoned": false, 00:11:27.565 "supported_io_types": { 00:11:27.565 "read": true, 00:11:27.565 "write": true, 00:11:27.565 "unmap": true, 00:11:27.565 "flush": true, 00:11:27.565 "reset": true, 00:11:27.565 "nvme_admin": false, 00:11:27.565 "nvme_io": false, 00:11:27.565 "nvme_io_md": false, 00:11:27.565 "write_zeroes": true, 00:11:27.565 "zcopy": true, 00:11:27.565 "get_zone_info": false, 00:11:27.565 "zone_management": false, 00:11:27.565 "zone_append": false, 00:11:27.565 "compare": false, 00:11:27.565 "compare_and_write": false, 00:11:27.565 "abort": true, 00:11:27.565 "seek_hole": false, 00:11:27.565 "seek_data": false, 00:11:27.565 "copy": true, 00:11:27.565 "nvme_iov_md": false 00:11:27.565 }, 00:11:27.565 "memory_domains": [ 00:11:27.565 { 00:11:27.565 "dma_device_id": "system", 00:11:27.565 "dma_device_type": 1 00:11:27.565 }, 00:11:27.565 { 00:11:27.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.565 "dma_device_type": 2 00:11:27.565 } 00:11:27.565 ], 00:11:27.565 "driver_specific": {} 00:11:27.565 } 00:11:27.565 ] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.565 22:28:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.565 "name": "Existed_Raid", 00:11:27.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.565 "strip_size_kb": 64, 00:11:27.565 "state": "configuring", 00:11:27.565 "raid_level": "concat", 00:11:27.565 "superblock": false, 00:11:27.565 "num_base_bdevs": 3, 00:11:27.565 "num_base_bdevs_discovered": 2, 00:11:27.565 "num_base_bdevs_operational": 3, 00:11:27.565 "base_bdevs_list": [ 00:11:27.565 { 00:11:27.565 "name": "BaseBdev1", 
00:11:27.565 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:27.565 "is_configured": true, 00:11:27.565 "data_offset": 0, 00:11:27.565 "data_size": 65536 00:11:27.565 }, 00:11:27.565 { 00:11:27.565 "name": null, 00:11:27.565 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:27.565 "is_configured": false, 00:11:27.565 "data_offset": 0, 00:11:27.565 "data_size": 65536 00:11:27.565 }, 00:11:27.565 { 00:11:27.565 "name": "BaseBdev3", 00:11:27.565 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:27.565 "is_configured": true, 00:11:27.565 "data_offset": 0, 00:11:27.565 "data_size": 65536 00:11:27.565 } 00:11:27.565 ] 00:11:27.565 }' 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.565 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.131 [2024-09-27 22:28:23.843490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:28.131 
22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.131 "name": "Existed_Raid", 00:11:28.131 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:28.131 "strip_size_kb": 64, 00:11:28.131 "state": "configuring", 00:11:28.131 "raid_level": "concat", 00:11:28.131 "superblock": false, 00:11:28.131 "num_base_bdevs": 3, 00:11:28.131 "num_base_bdevs_discovered": 1, 00:11:28.131 "num_base_bdevs_operational": 3, 00:11:28.131 "base_bdevs_list": [ 00:11:28.131 { 00:11:28.131 "name": "BaseBdev1", 00:11:28.131 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:28.131 "is_configured": true, 00:11:28.131 "data_offset": 0, 00:11:28.131 "data_size": 65536 00:11:28.131 }, 00:11:28.131 { 00:11:28.131 "name": null, 00:11:28.131 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:28.131 "is_configured": false, 00:11:28.131 "data_offset": 0, 00:11:28.131 "data_size": 65536 00:11:28.131 }, 00:11:28.131 { 00:11:28.131 "name": null, 00:11:28.131 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:28.131 "is_configured": false, 00:11:28.131 "data_offset": 0, 00:11:28.131 "data_size": 65536 00:11:28.131 } 00:11:28.131 ] 00:11:28.131 }' 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.131 22:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.390 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.390 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.390 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.390 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.390 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.648 [2024-09-27 22:28:24.307275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.648 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.648 "name": "Existed_Raid", 00:11:28.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.648 "strip_size_kb": 64, 00:11:28.648 "state": "configuring", 00:11:28.648 "raid_level": "concat", 00:11:28.648 "superblock": false, 00:11:28.648 "num_base_bdevs": 3, 00:11:28.648 "num_base_bdevs_discovered": 2, 00:11:28.648 "num_base_bdevs_operational": 3, 00:11:28.648 "base_bdevs_list": [ 00:11:28.648 { 00:11:28.648 "name": "BaseBdev1", 00:11:28.648 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:28.648 "is_configured": true, 00:11:28.648 "data_offset": 0, 00:11:28.648 "data_size": 65536 00:11:28.648 }, 00:11:28.648 { 00:11:28.648 "name": null, 00:11:28.649 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:28.649 "is_configured": false, 00:11:28.649 "data_offset": 0, 00:11:28.649 "data_size": 65536 00:11:28.649 }, 00:11:28.649 { 00:11:28.649 "name": "BaseBdev3", 00:11:28.649 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:28.649 "is_configured": true, 00:11:28.649 "data_offset": 0, 00:11:28.649 "data_size": 65536 00:11:28.649 } 00:11:28.649 ] 00:11:28.649 }' 00:11:28.649 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.649 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.907 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.165 [2024-09-27 22:28:24.787346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.165 
22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.165 "name": "Existed_Raid", 00:11:29.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.165 "strip_size_kb": 64, 00:11:29.165 "state": "configuring", 00:11:29.165 "raid_level": "concat", 00:11:29.165 "superblock": false, 00:11:29.165 "num_base_bdevs": 3, 00:11:29.165 "num_base_bdevs_discovered": 1, 00:11:29.165 "num_base_bdevs_operational": 3, 00:11:29.165 "base_bdevs_list": [ 00:11:29.165 { 00:11:29.165 "name": null, 00:11:29.165 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:29.165 "is_configured": false, 00:11:29.165 "data_offset": 0, 00:11:29.165 "data_size": 65536 00:11:29.165 }, 00:11:29.165 { 00:11:29.165 "name": null, 00:11:29.165 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:29.165 "is_configured": false, 00:11:29.165 "data_offset": 0, 00:11:29.165 "data_size": 65536 00:11:29.165 }, 00:11:29.165 { 00:11:29.165 "name": "BaseBdev3", 00:11:29.165 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:29.165 "is_configured": true, 00:11:29.165 "data_offset": 0, 00:11:29.165 "data_size": 65536 00:11:29.165 } 00:11:29.165 ] 00:11:29.165 }' 00:11:29.165 22:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.165 22:28:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.731 [2024-09-27 22:28:25.412360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.731 22:28:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.731 "name": "Existed_Raid", 00:11:29.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.731 "strip_size_kb": 64, 00:11:29.731 "state": "configuring", 00:11:29.731 "raid_level": "concat", 00:11:29.731 "superblock": false, 00:11:29.731 "num_base_bdevs": 3, 00:11:29.731 "num_base_bdevs_discovered": 2, 00:11:29.731 "num_base_bdevs_operational": 3, 00:11:29.731 "base_bdevs_list": [ 00:11:29.731 { 00:11:29.731 "name": null, 00:11:29.731 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:29.731 "is_configured": false, 00:11:29.731 "data_offset": 0, 00:11:29.731 "data_size": 65536 00:11:29.731 }, 00:11:29.731 { 00:11:29.731 "name": "BaseBdev2", 00:11:29.731 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:29.731 "is_configured": true, 00:11:29.731 "data_offset": 
0, 00:11:29.731 "data_size": 65536 00:11:29.731 }, 00:11:29.731 { 00:11:29.731 "name": "BaseBdev3", 00:11:29.731 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:29.731 "is_configured": true, 00:11:29.731 "data_offset": 0, 00:11:29.731 "data_size": 65536 00:11:29.731 } 00:11:29.731 ] 00:11:29.731 }' 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.731 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.990 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.990 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.990 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.990 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 652c7651-62eb-4e69-bdf1-979d0bc04113 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 [2024-09-27 22:28:25.983878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.249 NewBaseBdev 00:11:30.249 [2024-09-27 22:28:25.984128] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.249 [2024-09-27 22:28:25.984155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:30.249 [2024-09-27 22:28:25.984446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:30.249 [2024-09-27 22:28:25.984590] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.249 [2024-09-27 22:28:25.984600] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.249 [2024-09-27 22:28:25.984853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:30.249 
22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 22:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 [ 00:11:30.249 { 00:11:30.249 "name": "NewBaseBdev", 00:11:30.249 "aliases": [ 00:11:30.249 "652c7651-62eb-4e69-bdf1-979d0bc04113" 00:11:30.249 ], 00:11:30.249 "product_name": "Malloc disk", 00:11:30.249 "block_size": 512, 00:11:30.249 "num_blocks": 65536, 00:11:30.249 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:30.249 "assigned_rate_limits": { 00:11:30.249 "rw_ios_per_sec": 0, 00:11:30.249 "rw_mbytes_per_sec": 0, 00:11:30.249 "r_mbytes_per_sec": 0, 00:11:30.249 "w_mbytes_per_sec": 0 00:11:30.249 }, 00:11:30.249 "claimed": true, 00:11:30.249 "claim_type": "exclusive_write", 00:11:30.249 "zoned": false, 00:11:30.249 "supported_io_types": { 00:11:30.249 "read": true, 00:11:30.249 "write": true, 00:11:30.249 "unmap": true, 00:11:30.249 "flush": true, 00:11:30.249 "reset": true, 00:11:30.249 "nvme_admin": false, 00:11:30.249 "nvme_io": false, 00:11:30.249 "nvme_io_md": false, 00:11:30.249 "write_zeroes": true, 00:11:30.249 "zcopy": true, 00:11:30.249 "get_zone_info": false, 00:11:30.249 "zone_management": false, 00:11:30.249 "zone_append": false, 00:11:30.249 "compare": false, 00:11:30.249 "compare_and_write": false, 00:11:30.249 "abort": true, 00:11:30.249 "seek_hole": false, 00:11:30.249 "seek_data": false, 00:11:30.249 "copy": true, 00:11:30.249 "nvme_iov_md": false 00:11:30.249 }, 00:11:30.249 
"memory_domains": [ 00:11:30.249 { 00:11:30.249 "dma_device_id": "system", 00:11:30.249 "dma_device_type": 1 00:11:30.249 }, 00:11:30.249 { 00:11:30.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.249 "dma_device_type": 2 00:11:30.249 } 00:11:30.249 ], 00:11:30.249 "driver_specific": {} 00:11:30.249 } 00:11:30.249 ] 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.249 "name": "Existed_Raid", 00:11:30.249 "uuid": "fe021ea5-f450-46b4-9eab-ec2c050eb902", 00:11:30.249 "strip_size_kb": 64, 00:11:30.249 "state": "online", 00:11:30.249 "raid_level": "concat", 00:11:30.249 "superblock": false, 00:11:30.249 "num_base_bdevs": 3, 00:11:30.249 "num_base_bdevs_discovered": 3, 00:11:30.249 "num_base_bdevs_operational": 3, 00:11:30.249 "base_bdevs_list": [ 00:11:30.249 { 00:11:30.249 "name": "NewBaseBdev", 00:11:30.249 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:30.249 "is_configured": true, 00:11:30.249 "data_offset": 0, 00:11:30.249 "data_size": 65536 00:11:30.249 }, 00:11:30.249 { 00:11:30.249 "name": "BaseBdev2", 00:11:30.249 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:30.249 "is_configured": true, 00:11:30.250 "data_offset": 0, 00:11:30.250 "data_size": 65536 00:11:30.250 }, 00:11:30.250 { 00:11:30.250 "name": "BaseBdev3", 00:11:30.250 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:30.250 "is_configured": true, 00:11:30.250 "data_offset": 0, 00:11:30.250 "data_size": 65536 00:11:30.250 } 00:11:30.250 ] 00:11:30.250 }' 00:11:30.250 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.250 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.817 [2024-09-27 22:28:26.483543] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.817 "name": "Existed_Raid", 00:11:30.817 "aliases": [ 00:11:30.817 "fe021ea5-f450-46b4-9eab-ec2c050eb902" 00:11:30.817 ], 00:11:30.817 "product_name": "Raid Volume", 00:11:30.817 "block_size": 512, 00:11:30.817 "num_blocks": 196608, 00:11:30.817 "uuid": "fe021ea5-f450-46b4-9eab-ec2c050eb902", 00:11:30.817 "assigned_rate_limits": { 00:11:30.817 "rw_ios_per_sec": 0, 00:11:30.817 "rw_mbytes_per_sec": 0, 00:11:30.817 "r_mbytes_per_sec": 0, 00:11:30.817 "w_mbytes_per_sec": 0 00:11:30.817 }, 00:11:30.817 "claimed": false, 00:11:30.817 "zoned": false, 00:11:30.817 "supported_io_types": { 00:11:30.817 "read": true, 00:11:30.817 "write": true, 00:11:30.817 "unmap": true, 00:11:30.817 "flush": true, 00:11:30.817 "reset": true, 00:11:30.817 "nvme_admin": false, 00:11:30.817 "nvme_io": false, 00:11:30.817 "nvme_io_md": false, 00:11:30.817 
"write_zeroes": true, 00:11:30.817 "zcopy": false, 00:11:30.817 "get_zone_info": false, 00:11:30.817 "zone_management": false, 00:11:30.817 "zone_append": false, 00:11:30.817 "compare": false, 00:11:30.817 "compare_and_write": false, 00:11:30.817 "abort": false, 00:11:30.817 "seek_hole": false, 00:11:30.817 "seek_data": false, 00:11:30.817 "copy": false, 00:11:30.817 "nvme_iov_md": false 00:11:30.817 }, 00:11:30.817 "memory_domains": [ 00:11:30.817 { 00:11:30.817 "dma_device_id": "system", 00:11:30.817 "dma_device_type": 1 00:11:30.817 }, 00:11:30.817 { 00:11:30.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.817 "dma_device_type": 2 00:11:30.817 }, 00:11:30.817 { 00:11:30.817 "dma_device_id": "system", 00:11:30.817 "dma_device_type": 1 00:11:30.817 }, 00:11:30.817 { 00:11:30.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.817 "dma_device_type": 2 00:11:30.817 }, 00:11:30.817 { 00:11:30.817 "dma_device_id": "system", 00:11:30.817 "dma_device_type": 1 00:11:30.817 }, 00:11:30.817 { 00:11:30.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.817 "dma_device_type": 2 00:11:30.817 } 00:11:30.817 ], 00:11:30.817 "driver_specific": { 00:11:30.817 "raid": { 00:11:30.817 "uuid": "fe021ea5-f450-46b4-9eab-ec2c050eb902", 00:11:30.817 "strip_size_kb": 64, 00:11:30.817 "state": "online", 00:11:30.817 "raid_level": "concat", 00:11:30.817 "superblock": false, 00:11:30.817 "num_base_bdevs": 3, 00:11:30.817 "num_base_bdevs_discovered": 3, 00:11:30.817 "num_base_bdevs_operational": 3, 00:11:30.817 "base_bdevs_list": [ 00:11:30.817 { 00:11:30.817 "name": "NewBaseBdev", 00:11:30.817 "uuid": "652c7651-62eb-4e69-bdf1-979d0bc04113", 00:11:30.817 "is_configured": true, 00:11:30.817 "data_offset": 0, 00:11:30.817 "data_size": 65536 00:11:30.817 }, 00:11:30.817 { 00:11:30.817 "name": "BaseBdev2", 00:11:30.817 "uuid": "07e913e6-573d-4efb-9cce-9a58d1530a9b", 00:11:30.817 "is_configured": true, 00:11:30.817 "data_offset": 0, 00:11:30.817 "data_size": 65536 00:11:30.817 }, 
00:11:30.817 { 00:11:30.817 "name": "BaseBdev3", 00:11:30.817 "uuid": "41711741-a776-4f0a-aeb9-7180040b6baa", 00:11:30.817 "is_configured": true, 00:11:30.817 "data_offset": 0, 00:11:30.817 "data_size": 65536 00:11:30.817 } 00:11:30.817 ] 00:11:30.817 } 00:11:30.817 } 00:11:30.817 }' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.817 BaseBdev2 00:11:30.817 BaseBdev3' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.817 22:28:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.817 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.074 
22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.074 [2024-09-27 22:28:26.723212] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.074 [2024-09-27 22:28:26.723357] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.074 [2024-09-27 22:28:26.723464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.074 [2024-09-27 22:28:26.723520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.074 [2024-09-27 22:28:26.723536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66131 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 66131 ']' 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 66131 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66131 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.074 killing process with pid 66131 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66131' 00:11:31.074 22:28:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 66131 00:11:31.074 [2024-09-27 22:28:26.771208] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.074 22:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 66131 00:11:31.331 [2024-09-27 22:28:27.077951] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.231 22:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:33.231 00:11:33.231 real 0m11.685s 00:11:33.232 user 0m17.839s 00:11:33.232 sys 0m2.192s 00:11:33.232 22:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.232 22:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.232 ************************************ 00:11:33.232 END TEST raid_state_function_test 00:11:33.232 ************************************ 00:11:33.490 22:28:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:33.490 22:28:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:33.490 22:28:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.490 22:28:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.490 ************************************ 00:11:33.490 START TEST raid_state_function_test_sb 00:11:33.490 ************************************ 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:33.490 22:28:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:33.490 22:28:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66769 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66769' 00:11:33.490 Process raid pid: 66769 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66769 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66769 ']' 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.490 22:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.490 [2024-09-27 22:28:29.242574] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:11:33.491 [2024-09-27 22:28:29.242703] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.749 [2024-09-27 22:28:29.414667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.007 [2024-09-27 22:28:29.652782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.302 [2024-09-27 22:28:29.895952] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.302 [2024-09-27 22:28:29.895999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.571 [2024-09-27 22:28:30.382682] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.571 [2024-09-27 22:28:30.382744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.571 [2024-09-27 
22:28:30.382756] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.571 [2024-09-27 22:28:30.382771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.571 [2024-09-27 22:28:30.382778] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.571 [2024-09-27 22:28:30.382791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.571 "name": "Existed_Raid", 00:11:34.571 "uuid": "4983aeee-e43c-4a6d-8b43-c7f3c64884e3", 00:11:34.571 "strip_size_kb": 64, 00:11:34.571 "state": "configuring", 00:11:34.571 "raid_level": "concat", 00:11:34.571 "superblock": true, 00:11:34.571 "num_base_bdevs": 3, 00:11:34.571 "num_base_bdevs_discovered": 0, 00:11:34.571 "num_base_bdevs_operational": 3, 00:11:34.571 "base_bdevs_list": [ 00:11:34.571 { 00:11:34.571 "name": "BaseBdev1", 00:11:34.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.571 "is_configured": false, 00:11:34.571 "data_offset": 0, 00:11:34.571 "data_size": 0 00:11:34.571 }, 00:11:34.571 { 00:11:34.571 "name": "BaseBdev2", 00:11:34.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.571 "is_configured": false, 00:11:34.571 "data_offset": 0, 00:11:34.571 "data_size": 0 00:11:34.571 }, 00:11:34.571 { 00:11:34.571 "name": "BaseBdev3", 00:11:34.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.571 "is_configured": false, 00:11:34.571 "data_offset": 0, 00:11:34.571 "data_size": 0 00:11:34.571 } 00:11:34.571 ] 00:11:34.571 }' 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.571 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 [2024-09-27 22:28:30.818025] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.138 [2024-09-27 22:28:30.818083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 [2024-09-27 22:28:30.830022] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.138 [2024-09-27 22:28:30.830070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.138 [2024-09-27 22:28:30.830080] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.138 [2024-09-27 22:28:30.830093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.138 [2024-09-27 22:28:30.830101] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.138 [2024-09-27 22:28:30.830113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.138 
22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 [2024-09-27 22:28:30.883055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.138 BaseBdev1 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 [ 00:11:35.138 { 
00:11:35.138 "name": "BaseBdev1", 00:11:35.138 "aliases": [ 00:11:35.138 "364f9952-8636-4dc2-b4a0-505327e580c7" 00:11:35.138 ], 00:11:35.138 "product_name": "Malloc disk", 00:11:35.138 "block_size": 512, 00:11:35.138 "num_blocks": 65536, 00:11:35.138 "uuid": "364f9952-8636-4dc2-b4a0-505327e580c7", 00:11:35.138 "assigned_rate_limits": { 00:11:35.138 "rw_ios_per_sec": 0, 00:11:35.138 "rw_mbytes_per_sec": 0, 00:11:35.138 "r_mbytes_per_sec": 0, 00:11:35.138 "w_mbytes_per_sec": 0 00:11:35.138 }, 00:11:35.138 "claimed": true, 00:11:35.138 "claim_type": "exclusive_write", 00:11:35.138 "zoned": false, 00:11:35.138 "supported_io_types": { 00:11:35.138 "read": true, 00:11:35.138 "write": true, 00:11:35.138 "unmap": true, 00:11:35.138 "flush": true, 00:11:35.138 "reset": true, 00:11:35.138 "nvme_admin": false, 00:11:35.138 "nvme_io": false, 00:11:35.138 "nvme_io_md": false, 00:11:35.138 "write_zeroes": true, 00:11:35.138 "zcopy": true, 00:11:35.138 "get_zone_info": false, 00:11:35.138 "zone_management": false, 00:11:35.138 "zone_append": false, 00:11:35.138 "compare": false, 00:11:35.138 "compare_and_write": false, 00:11:35.138 "abort": true, 00:11:35.138 "seek_hole": false, 00:11:35.138 "seek_data": false, 00:11:35.138 "copy": true, 00:11:35.138 "nvme_iov_md": false 00:11:35.138 }, 00:11:35.138 "memory_domains": [ 00:11:35.138 { 00:11:35.138 "dma_device_id": "system", 00:11:35.138 "dma_device_type": 1 00:11:35.138 }, 00:11:35.138 { 00:11:35.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.138 "dma_device_type": 2 00:11:35.138 } 00:11:35.138 ], 00:11:35.138 "driver_specific": {} 00:11:35.138 } 00:11:35.138 ] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.138 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.139 "name": "Existed_Raid", 00:11:35.139 "uuid": "119716b9-0b89-4998-9e96-705e73c9c299", 00:11:35.139 "strip_size_kb": 64, 00:11:35.139 "state": "configuring", 00:11:35.139 "raid_level": "concat", 00:11:35.139 "superblock": true, 00:11:35.139 
"num_base_bdevs": 3, 00:11:35.139 "num_base_bdevs_discovered": 1, 00:11:35.139 "num_base_bdevs_operational": 3, 00:11:35.139 "base_bdevs_list": [ 00:11:35.139 { 00:11:35.139 "name": "BaseBdev1", 00:11:35.139 "uuid": "364f9952-8636-4dc2-b4a0-505327e580c7", 00:11:35.139 "is_configured": true, 00:11:35.139 "data_offset": 2048, 00:11:35.139 "data_size": 63488 00:11:35.139 }, 00:11:35.139 { 00:11:35.139 "name": "BaseBdev2", 00:11:35.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.139 "is_configured": false, 00:11:35.139 "data_offset": 0, 00:11:35.139 "data_size": 0 00:11:35.139 }, 00:11:35.139 { 00:11:35.139 "name": "BaseBdev3", 00:11:35.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.139 "is_configured": false, 00:11:35.139 "data_offset": 0, 00:11:35.139 "data_size": 0 00:11:35.139 } 00:11:35.139 ] 00:11:35.139 }' 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.139 22:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.705 [2024-09-27 22:28:31.358549] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.705 [2024-09-27 22:28:31.358758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.705 
22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.705 [2024-09-27 22:28:31.370587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.705 [2024-09-27 22:28:31.373061] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.705 [2024-09-27 22:28:31.373231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.705 [2024-09-27 22:28:31.373324] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.705 [2024-09-27 22:28:31.373372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:35.705 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.706 "name": "Existed_Raid", 00:11:35.706 "uuid": "64569b8d-0d4c-4fd8-bf10-76341c4fc012", 00:11:35.706 "strip_size_kb": 64, 00:11:35.706 "state": "configuring", 00:11:35.706 "raid_level": "concat", 00:11:35.706 "superblock": true, 00:11:35.706 "num_base_bdevs": 3, 00:11:35.706 "num_base_bdevs_discovered": 1, 00:11:35.706 "num_base_bdevs_operational": 3, 00:11:35.706 "base_bdevs_list": [ 00:11:35.706 { 00:11:35.706 "name": "BaseBdev1", 00:11:35.706 "uuid": "364f9952-8636-4dc2-b4a0-505327e580c7", 00:11:35.706 "is_configured": true, 00:11:35.706 "data_offset": 2048, 00:11:35.706 "data_size": 63488 00:11:35.706 }, 00:11:35.706 { 00:11:35.706 "name": "BaseBdev2", 00:11:35.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.706 "is_configured": false, 00:11:35.706 "data_offset": 0, 00:11:35.706 "data_size": 0 00:11:35.706 }, 00:11:35.706 { 00:11:35.706 "name": "BaseBdev3", 00:11:35.706 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:35.706 "is_configured": false, 00:11:35.706 "data_offset": 0, 00:11:35.706 "data_size": 0 00:11:35.706 } 00:11:35.706 ] 00:11:35.706 }' 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.706 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.964 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.964 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.964 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.225 [2024-09-27 22:28:31.866995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.225 BaseBdev2 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.225 [ 00:11:36.225 { 00:11:36.225 "name": "BaseBdev2", 00:11:36.225 "aliases": [ 00:11:36.225 "cdc34d2f-8fb7-4ca7-a909-73e6883af897" 00:11:36.225 ], 00:11:36.225 "product_name": "Malloc disk", 00:11:36.225 "block_size": 512, 00:11:36.225 "num_blocks": 65536, 00:11:36.225 "uuid": "cdc34d2f-8fb7-4ca7-a909-73e6883af897", 00:11:36.225 "assigned_rate_limits": { 00:11:36.225 "rw_ios_per_sec": 0, 00:11:36.225 "rw_mbytes_per_sec": 0, 00:11:36.225 "r_mbytes_per_sec": 0, 00:11:36.225 "w_mbytes_per_sec": 0 00:11:36.225 }, 00:11:36.225 "claimed": true, 00:11:36.225 "claim_type": "exclusive_write", 00:11:36.225 "zoned": false, 00:11:36.225 "supported_io_types": { 00:11:36.225 "read": true, 00:11:36.225 "write": true, 00:11:36.225 "unmap": true, 00:11:36.225 "flush": true, 00:11:36.225 "reset": true, 00:11:36.225 "nvme_admin": false, 00:11:36.225 "nvme_io": false, 00:11:36.225 "nvme_io_md": false, 00:11:36.225 "write_zeroes": true, 00:11:36.225 "zcopy": true, 00:11:36.225 "get_zone_info": false, 00:11:36.225 "zone_management": false, 00:11:36.225 "zone_append": false, 00:11:36.225 "compare": false, 00:11:36.225 "compare_and_write": false, 00:11:36.225 "abort": true, 00:11:36.225 "seek_hole": false, 00:11:36.225 "seek_data": false, 00:11:36.225 "copy": true, 00:11:36.225 "nvme_iov_md": false 00:11:36.225 }, 00:11:36.225 "memory_domains": [ 00:11:36.225 { 00:11:36.225 "dma_device_id": "system", 00:11:36.225 "dma_device_type": 1 00:11:36.225 }, 00:11:36.225 { 00:11:36.225 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.225 "dma_device_type": 2 00:11:36.225 } 00:11:36.225 ], 00:11:36.225 "driver_specific": {} 00:11:36.225 } 00:11:36.225 ] 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.225 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.226 "name": "Existed_Raid", 00:11:36.226 "uuid": "64569b8d-0d4c-4fd8-bf10-76341c4fc012", 00:11:36.226 "strip_size_kb": 64, 00:11:36.226 "state": "configuring", 00:11:36.226 "raid_level": "concat", 00:11:36.226 "superblock": true, 00:11:36.226 "num_base_bdevs": 3, 00:11:36.226 "num_base_bdevs_discovered": 2, 00:11:36.226 "num_base_bdevs_operational": 3, 00:11:36.226 "base_bdevs_list": [ 00:11:36.226 { 00:11:36.226 "name": "BaseBdev1", 00:11:36.226 "uuid": "364f9952-8636-4dc2-b4a0-505327e580c7", 00:11:36.226 "is_configured": true, 00:11:36.226 "data_offset": 2048, 00:11:36.226 "data_size": 63488 00:11:36.226 }, 00:11:36.226 { 00:11:36.226 "name": "BaseBdev2", 00:11:36.226 "uuid": "cdc34d2f-8fb7-4ca7-a909-73e6883af897", 00:11:36.226 "is_configured": true, 00:11:36.226 "data_offset": 2048, 00:11:36.226 "data_size": 63488 00:11:36.226 }, 00:11:36.226 { 00:11:36.226 "name": "BaseBdev3", 00:11:36.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.226 "is_configured": false, 00:11:36.226 "data_offset": 0, 00:11:36.226 "data_size": 0 00:11:36.226 } 00:11:36.226 ] 00:11:36.226 }' 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.226 22:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.484 22:28:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.484 [2024-09-27 22:28:32.337764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.484 [2024-09-27 22:28:32.338024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.484 [2024-09-27 22:28:32.338055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:36.484 [2024-09-27 22:28:32.338326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:36.484 [2024-09-27 22:28:32.338486] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.484 [2024-09-27 22:28:32.338500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:36.484 [2024-09-27 22:28:32.338634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.484 BaseBdev3 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:36.484 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.485 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.485 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.485 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.485 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.485 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.743 [ 00:11:36.743 { 00:11:36.743 "name": "BaseBdev3", 00:11:36.743 "aliases": [ 00:11:36.743 "3a6fff67-5128-4a95-8b6f-b609e50be5ab" 00:11:36.743 ], 00:11:36.743 "product_name": "Malloc disk", 00:11:36.743 "block_size": 512, 00:11:36.743 "num_blocks": 65536, 00:11:36.743 "uuid": "3a6fff67-5128-4a95-8b6f-b609e50be5ab", 00:11:36.743 "assigned_rate_limits": { 00:11:36.743 "rw_ios_per_sec": 0, 00:11:36.743 "rw_mbytes_per_sec": 0, 00:11:36.743 "r_mbytes_per_sec": 0, 00:11:36.743 "w_mbytes_per_sec": 0 00:11:36.743 }, 00:11:36.743 "claimed": true, 00:11:36.743 "claim_type": "exclusive_write", 00:11:36.743 "zoned": false, 00:11:36.743 "supported_io_types": { 00:11:36.743 "read": true, 00:11:36.743 "write": true, 00:11:36.743 "unmap": true, 00:11:36.743 "flush": true, 00:11:36.743 "reset": true, 00:11:36.743 "nvme_admin": false, 00:11:36.743 "nvme_io": false, 00:11:36.744 "nvme_io_md": false, 00:11:36.744 "write_zeroes": true, 00:11:36.744 "zcopy": true, 00:11:36.744 "get_zone_info": false, 00:11:36.744 "zone_management": false, 00:11:36.744 "zone_append": false, 00:11:36.744 "compare": false, 00:11:36.744 "compare_and_write": false, 00:11:36.744 "abort": true, 00:11:36.744 "seek_hole": false, 00:11:36.744 "seek_data": false, 
00:11:36.744 "copy": true, 00:11:36.744 "nvme_iov_md": false 00:11:36.744 }, 00:11:36.744 "memory_domains": [ 00:11:36.744 { 00:11:36.744 "dma_device_id": "system", 00:11:36.744 "dma_device_type": 1 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.744 "dma_device_type": 2 00:11:36.744 } 00:11:36.744 ], 00:11:36.744 "driver_specific": {} 00:11:36.744 } 00:11:36.744 ] 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.744 "name": "Existed_Raid", 00:11:36.744 "uuid": "64569b8d-0d4c-4fd8-bf10-76341c4fc012", 00:11:36.744 "strip_size_kb": 64, 00:11:36.744 "state": "online", 00:11:36.744 "raid_level": "concat", 00:11:36.744 "superblock": true, 00:11:36.744 "num_base_bdevs": 3, 00:11:36.744 "num_base_bdevs_discovered": 3, 00:11:36.744 "num_base_bdevs_operational": 3, 00:11:36.744 "base_bdevs_list": [ 00:11:36.744 { 00:11:36.744 "name": "BaseBdev1", 00:11:36.744 "uuid": "364f9952-8636-4dc2-b4a0-505327e580c7", 00:11:36.744 "is_configured": true, 00:11:36.744 "data_offset": 2048, 00:11:36.744 "data_size": 63488 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "name": "BaseBdev2", 00:11:36.744 "uuid": "cdc34d2f-8fb7-4ca7-a909-73e6883af897", 00:11:36.744 "is_configured": true, 00:11:36.744 "data_offset": 2048, 00:11:36.744 "data_size": 63488 00:11:36.744 }, 00:11:36.744 { 00:11:36.744 "name": "BaseBdev3", 00:11:36.744 "uuid": "3a6fff67-5128-4a95-8b6f-b609e50be5ab", 00:11:36.744 "is_configured": true, 00:11:36.744 "data_offset": 2048, 00:11:36.744 "data_size": 63488 00:11:36.744 } 00:11:36.744 ] 00:11:36.744 }' 00:11:36.744 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.744 22:28:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.003 [2024-09-27 22:28:32.845445] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.003 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.261 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.261 "name": "Existed_Raid", 00:11:37.261 "aliases": [ 00:11:37.261 "64569b8d-0d4c-4fd8-bf10-76341c4fc012" 00:11:37.261 ], 00:11:37.261 "product_name": "Raid Volume", 00:11:37.261 "block_size": 512, 00:11:37.261 "num_blocks": 190464, 00:11:37.261 "uuid": "64569b8d-0d4c-4fd8-bf10-76341c4fc012", 00:11:37.261 "assigned_rate_limits": { 00:11:37.261 "rw_ios_per_sec": 0, 00:11:37.261 "rw_mbytes_per_sec": 0, 00:11:37.261 
"r_mbytes_per_sec": 0, 00:11:37.261 "w_mbytes_per_sec": 0 00:11:37.261 }, 00:11:37.261 "claimed": false, 00:11:37.261 "zoned": false, 00:11:37.261 "supported_io_types": { 00:11:37.261 "read": true, 00:11:37.261 "write": true, 00:11:37.261 "unmap": true, 00:11:37.261 "flush": true, 00:11:37.261 "reset": true, 00:11:37.261 "nvme_admin": false, 00:11:37.261 "nvme_io": false, 00:11:37.261 "nvme_io_md": false, 00:11:37.261 "write_zeroes": true, 00:11:37.261 "zcopy": false, 00:11:37.261 "get_zone_info": false, 00:11:37.261 "zone_management": false, 00:11:37.261 "zone_append": false, 00:11:37.261 "compare": false, 00:11:37.261 "compare_and_write": false, 00:11:37.262 "abort": false, 00:11:37.262 "seek_hole": false, 00:11:37.262 "seek_data": false, 00:11:37.262 "copy": false, 00:11:37.262 "nvme_iov_md": false 00:11:37.262 }, 00:11:37.262 "memory_domains": [ 00:11:37.262 { 00:11:37.262 "dma_device_id": "system", 00:11:37.262 "dma_device_type": 1 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.262 "dma_device_type": 2 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "dma_device_id": "system", 00:11:37.262 "dma_device_type": 1 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.262 "dma_device_type": 2 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "dma_device_id": "system", 00:11:37.262 "dma_device_type": 1 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.262 "dma_device_type": 2 00:11:37.262 } 00:11:37.262 ], 00:11:37.262 "driver_specific": { 00:11:37.262 "raid": { 00:11:37.262 "uuid": "64569b8d-0d4c-4fd8-bf10-76341c4fc012", 00:11:37.262 "strip_size_kb": 64, 00:11:37.262 "state": "online", 00:11:37.262 "raid_level": "concat", 00:11:37.262 "superblock": true, 00:11:37.262 "num_base_bdevs": 3, 00:11:37.262 "num_base_bdevs_discovered": 3, 00:11:37.262 "num_base_bdevs_operational": 3, 00:11:37.262 "base_bdevs_list": [ 00:11:37.262 { 00:11:37.262 
"name": "BaseBdev1", 00:11:37.262 "uuid": "364f9952-8636-4dc2-b4a0-505327e580c7", 00:11:37.262 "is_configured": true, 00:11:37.262 "data_offset": 2048, 00:11:37.262 "data_size": 63488 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "name": "BaseBdev2", 00:11:37.262 "uuid": "cdc34d2f-8fb7-4ca7-a909-73e6883af897", 00:11:37.262 "is_configured": true, 00:11:37.262 "data_offset": 2048, 00:11:37.262 "data_size": 63488 00:11:37.262 }, 00:11:37.262 { 00:11:37.262 "name": "BaseBdev3", 00:11:37.262 "uuid": "3a6fff67-5128-4a95-8b6f-b609e50be5ab", 00:11:37.262 "is_configured": true, 00:11:37.262 "data_offset": 2048, 00:11:37.262 "data_size": 63488 00:11:37.262 } 00:11:37.262 ] 00:11:37.262 } 00:11:37.262 } 00:11:37.262 }' 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:37.262 BaseBdev2 00:11:37.262 BaseBdev3' 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.262 22:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.262 22:28:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.262 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.262 [2024-09-27 22:28:33.108769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.262 [2024-09-27 22:28:33.108808] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.262 [2024-09-27 22:28:33.108863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.521 "name": "Existed_Raid", 00:11:37.521 "uuid": "64569b8d-0d4c-4fd8-bf10-76341c4fc012", 00:11:37.521 "strip_size_kb": 64, 00:11:37.521 "state": "offline", 00:11:37.521 "raid_level": "concat", 00:11:37.521 "superblock": true, 00:11:37.521 "num_base_bdevs": 3, 00:11:37.521 "num_base_bdevs_discovered": 2, 00:11:37.521 "num_base_bdevs_operational": 2, 00:11:37.521 "base_bdevs_list": [ 00:11:37.521 { 00:11:37.521 "name": null, 00:11:37.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:37.521 "is_configured": false, 00:11:37.521 "data_offset": 0, 00:11:37.521 "data_size": 63488 00:11:37.521 }, 00:11:37.521 { 00:11:37.521 "name": "BaseBdev2", 00:11:37.521 "uuid": "cdc34d2f-8fb7-4ca7-a909-73e6883af897", 00:11:37.521 "is_configured": true, 00:11:37.521 "data_offset": 2048, 00:11:37.521 "data_size": 63488 00:11:37.521 }, 00:11:37.521 { 00:11:37.521 "name": "BaseBdev3", 00:11:37.521 "uuid": "3a6fff67-5128-4a95-8b6f-b609e50be5ab", 00:11:37.521 "is_configured": true, 00:11:37.521 "data_offset": 2048, 00:11:37.521 "data_size": 63488 00:11:37.521 } 00:11:37.521 ] 00:11:37.521 }' 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.521 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
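For reference, the `verify_raid_bdev_state` checks exercised throughout this run boil down to comparing a handful of fields in the `bdev_raid_get_bdevs` JSON against expectations. A minimal Python sketch of that comparison (the helper name and trimmed JSON snapshot are illustrative, not taken from the suite; field names match the output dumped above, here the "offline" state reached after `bdev_malloc_delete BaseBdev1`, since concat carries no redundancy):

```python
import json

# Sketch mirroring what bdev_raid.sh's verify_raid_bdev_state compares
# against `rpc.py bdev_raid_get_bdevs all` output. Helper name is
# hypothetical; field names follow the JSON shown in the log.
def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts only base bdevs that are actually configured
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == discovered

# Trimmed snapshot modeled on the log above: after BaseBdev1 is deleted,
# the concat array goes offline with 2 of 3 members still configured.
snapshot = json.loads("""{
  "name": "Existed_Raid",
  "state": "offline",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}""")

verify_raid_bdev_state(snapshot, "offline", "concat", 64, 2)
```

The shell version in `bdev_raid.sh` does the same thing with `jq -r '.[] | select(.name == "Existed_Raid")'` and per-field string comparisons, as visible in the xtrace output above.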
00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.112 [2024-09-27 22:28:33.719350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.112 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.113 [2024-09-27 22:28:33.868183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.113 [2024-09-27 22:28:33.868240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.113 22:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.372 BaseBdev2 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.372
22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.372 [ 00:11:38.372 { 00:11:38.372 "name": "BaseBdev2", 00:11:38.372 "aliases": [ 00:11:38.372 "58d9cd6a-14dc-4c5e-a165-6d2d78c20665" 00:11:38.372 ], 00:11:38.372 "product_name": "Malloc disk", 00:11:38.372 "block_size": 512, 00:11:38.372 "num_blocks": 65536, 00:11:38.372 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:38.372 "assigned_rate_limits": { 00:11:38.372 "rw_ios_per_sec": 0, 00:11:38.372 "rw_mbytes_per_sec": 0, 00:11:38.372 "r_mbytes_per_sec": 0, 00:11:38.372 "w_mbytes_per_sec": 0 
00:11:38.372 }, 00:11:38.372 "claimed": false, 00:11:38.372 "zoned": false, 00:11:38.372 "supported_io_types": { 00:11:38.372 "read": true, 00:11:38.372 "write": true, 00:11:38.372 "unmap": true, 00:11:38.372 "flush": true, 00:11:38.372 "reset": true, 00:11:38.372 "nvme_admin": false, 00:11:38.372 "nvme_io": false, 00:11:38.372 "nvme_io_md": false, 00:11:38.372 "write_zeroes": true, 00:11:38.372 "zcopy": true, 00:11:38.372 "get_zone_info": false, 00:11:38.372 "zone_management": false, 00:11:38.372 "zone_append": false, 00:11:38.372 "compare": false, 00:11:38.372 "compare_and_write": false, 00:11:38.372 "abort": true, 00:11:38.372 "seek_hole": false, 00:11:38.372 "seek_data": false, 00:11:38.372 "copy": true, 00:11:38.372 "nvme_iov_md": false 00:11:38.372 }, 00:11:38.372 "memory_domains": [ 00:11:38.372 { 00:11:38.372 "dma_device_id": "system", 00:11:38.372 "dma_device_type": 1 00:11:38.372 }, 00:11:38.372 { 00:11:38.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.372 "dma_device_type": 2 00:11:38.372 } 00:11:38.372 ], 00:11:38.372 "driver_specific": {} 00:11:38.372 } 00:11:38.372 ] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.372 BaseBdev3 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.372 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.372 [ 00:11:38.372 { 00:11:38.372 "name": "BaseBdev3", 00:11:38.372 "aliases": [ 00:11:38.372 "04e18673-aac1-4591-a64e-d38474628c2b" 00:11:38.372 ], 00:11:38.372 "product_name": "Malloc disk", 00:11:38.372 "block_size": 512, 00:11:38.372 "num_blocks": 65536, 00:11:38.372 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:38.372 "assigned_rate_limits": { 00:11:38.372 "rw_ios_per_sec": 0, 00:11:38.372 "rw_mbytes_per_sec": 0, 
00:11:38.372 "r_mbytes_per_sec": 0, 00:11:38.372 "w_mbytes_per_sec": 0 00:11:38.372 }, 00:11:38.372 "claimed": false, 00:11:38.372 "zoned": false, 00:11:38.372 "supported_io_types": { 00:11:38.372 "read": true, 00:11:38.372 "write": true, 00:11:38.372 "unmap": true, 00:11:38.372 "flush": true, 00:11:38.372 "reset": true, 00:11:38.372 "nvme_admin": false, 00:11:38.372 "nvme_io": false, 00:11:38.372 "nvme_io_md": false, 00:11:38.372 "write_zeroes": true, 00:11:38.372 "zcopy": true, 00:11:38.372 "get_zone_info": false, 00:11:38.372 "zone_management": false, 00:11:38.372 "zone_append": false, 00:11:38.372 "compare": false, 00:11:38.372 "compare_and_write": false, 00:11:38.372 "abort": true, 00:11:38.372 "seek_hole": false, 00:11:38.372 "seek_data": false, 00:11:38.372 "copy": true, 00:11:38.372 "nvme_iov_md": false 00:11:38.372 }, 00:11:38.372 "memory_domains": [ 00:11:38.372 { 00:11:38.372 "dma_device_id": "system", 00:11:38.372 "dma_device_type": 1 00:11:38.373 }, 00:11:38.373 { 00:11:38.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.373 "dma_device_type": 2 00:11:38.373 } 00:11:38.373 ], 00:11:38.373 "driver_specific": {} 00:11:38.373 } 00:11:38.373 ] 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.373 [2024-09-27 22:28:34.198098] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.373 [2024-09-27 22:28:34.198147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.373 [2024-09-27 22:28:34.198175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.373 [2024-09-27 22:28:34.200311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.373 22:28:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.373 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.631 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.631 "name": "Existed_Raid", 00:11:38.632 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:38.632 "strip_size_kb": 64, 00:11:38.632 "state": "configuring", 00:11:38.632 "raid_level": "concat", 00:11:38.632 "superblock": true, 00:11:38.632 "num_base_bdevs": 3, 00:11:38.632 "num_base_bdevs_discovered": 2, 00:11:38.632 "num_base_bdevs_operational": 3, 00:11:38.632 "base_bdevs_list": [ 00:11:38.632 { 00:11:38.632 "name": "BaseBdev1", 00:11:38.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.632 "is_configured": false, 00:11:38.632 "data_offset": 0, 00:11:38.632 "data_size": 0 00:11:38.632 }, 00:11:38.632 { 00:11:38.632 "name": "BaseBdev2", 00:11:38.632 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:38.632 "is_configured": true, 00:11:38.632 "data_offset": 2048, 00:11:38.632 "data_size": 63488 00:11:38.632 }, 00:11:38.632 { 00:11:38.632 "name": "BaseBdev3", 00:11:38.632 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:38.632 "is_configured": true, 00:11:38.632 "data_offset": 2048, 00:11:38.632 "data_size": 63488 00:11:38.632 } 00:11:38.632 ] 00:11:38.632 }' 00:11:38.632 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.632 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.890 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:38.890 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.890 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.890 [2024-09-27 22:28:34.613422] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.891 "name": "Existed_Raid", 00:11:38.891 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:38.891 "strip_size_kb": 64, 00:11:38.891 "state": "configuring", 00:11:38.891 "raid_level": "concat", 00:11:38.891 "superblock": true, 00:11:38.891 "num_base_bdevs": 3, 00:11:38.891 "num_base_bdevs_discovered": 1, 00:11:38.891 "num_base_bdevs_operational": 3, 00:11:38.891 "base_bdevs_list": [ 00:11:38.891 { 00:11:38.891 "name": "BaseBdev1", 00:11:38.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.891 "is_configured": false, 00:11:38.891 "data_offset": 0, 00:11:38.891 "data_size": 0 00:11:38.891 }, 00:11:38.891 { 00:11:38.891 "name": null, 00:11:38.891 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:38.891 "is_configured": false, 00:11:38.891 "data_offset": 0, 00:11:38.891 "data_size": 63488 00:11:38.891 }, 00:11:38.891 { 00:11:38.891 "name": "BaseBdev3", 00:11:38.891 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:38.891 "is_configured": true, 00:11:38.891 "data_offset": 2048, 00:11:38.891 "data_size": 63488 00:11:38.891 } 00:11:38.891 ] 00:11:38.891 }' 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.891 22:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.150 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.150 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.150 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:39.150 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.409 [2024-09-27 22:28:35.090592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.409 BaseBdev1 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.409 22:28:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.409 [ 00:11:39.409 { 00:11:39.409 "name": "BaseBdev1", 00:11:39.409 "aliases": [ 00:11:39.409 "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4" 00:11:39.409 ], 00:11:39.409 "product_name": "Malloc disk", 00:11:39.409 "block_size": 512, 00:11:39.409 "num_blocks": 65536, 00:11:39.409 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:39.409 "assigned_rate_limits": { 00:11:39.409 "rw_ios_per_sec": 0, 00:11:39.409 "rw_mbytes_per_sec": 0, 00:11:39.409 "r_mbytes_per_sec": 0, 00:11:39.409 "w_mbytes_per_sec": 0 00:11:39.409 }, 00:11:39.409 "claimed": true, 00:11:39.409 "claim_type": "exclusive_write", 00:11:39.409 "zoned": false, 00:11:39.409 "supported_io_types": { 00:11:39.409 "read": true, 00:11:39.409 "write": true, 00:11:39.409 "unmap": true, 00:11:39.409 "flush": true, 00:11:39.409 "reset": true, 00:11:39.409 "nvme_admin": false, 00:11:39.409 "nvme_io": false, 00:11:39.409 "nvme_io_md": false, 00:11:39.409 "write_zeroes": true, 00:11:39.409 "zcopy": true, 00:11:39.409 "get_zone_info": false, 00:11:39.409 "zone_management": false, 00:11:39.409 "zone_append": false, 00:11:39.409 "compare": false, 00:11:39.409 "compare_and_write": false, 00:11:39.409 "abort": true, 00:11:39.409 "seek_hole": false, 00:11:39.409 "seek_data": false, 00:11:39.409 "copy": true, 00:11:39.409 "nvme_iov_md": false 00:11:39.409 }, 00:11:39.409 "memory_domains": [ 00:11:39.409 { 00:11:39.409 "dma_device_id": "system", 00:11:39.409 "dma_device_type": 1 00:11:39.409 }, 00:11:39.409 { 00:11:39.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.409 
"dma_device_type": 2 00:11:39.409 } 00:11:39.409 ], 00:11:39.409 "driver_specific": {} 00:11:39.409 } 00:11:39.409 ] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.409 "name": "Existed_Raid", 00:11:39.409 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:39.409 "strip_size_kb": 64, 00:11:39.409 "state": "configuring", 00:11:39.409 "raid_level": "concat", 00:11:39.409 "superblock": true, 00:11:39.409 "num_base_bdevs": 3, 00:11:39.409 "num_base_bdevs_discovered": 2, 00:11:39.409 "num_base_bdevs_operational": 3, 00:11:39.409 "base_bdevs_list": [ 00:11:39.409 { 00:11:39.409 "name": "BaseBdev1", 00:11:39.409 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:39.409 "is_configured": true, 00:11:39.409 "data_offset": 2048, 00:11:39.409 "data_size": 63488 00:11:39.409 }, 00:11:39.409 { 00:11:39.409 "name": null, 00:11:39.409 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:39.409 "is_configured": false, 00:11:39.409 "data_offset": 0, 00:11:39.409 "data_size": 63488 00:11:39.409 }, 00:11:39.409 { 00:11:39.409 "name": "BaseBdev3", 00:11:39.409 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:39.409 "is_configured": true, 00:11:39.409 "data_offset": 2048, 00:11:39.409 "data_size": 63488 00:11:39.409 } 00:11:39.409 ] 00:11:39.409 }' 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.409 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.668 [2024-09-27 22:28:35.522123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.668 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.926 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.926 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.926 "name": "Existed_Raid", 00:11:39.926 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:39.926 "strip_size_kb": 64, 00:11:39.926 "state": "configuring", 00:11:39.926 "raid_level": "concat", 00:11:39.926 "superblock": true, 00:11:39.926 "num_base_bdevs": 3, 00:11:39.926 "num_base_bdevs_discovered": 1, 00:11:39.926 "num_base_bdevs_operational": 3, 00:11:39.926 "base_bdevs_list": [ 00:11:39.926 { 00:11:39.926 "name": "BaseBdev1", 00:11:39.926 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:39.926 "is_configured": true, 00:11:39.926 "data_offset": 2048, 00:11:39.926 "data_size": 63488 00:11:39.926 }, 00:11:39.926 { 00:11:39.926 "name": null, 00:11:39.926 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:39.926 "is_configured": false, 00:11:39.926 "data_offset": 0, 00:11:39.926 "data_size": 63488 00:11:39.926 }, 00:11:39.926 { 00:11:39.926 "name": null, 00:11:39.926 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:39.926 "is_configured": false, 00:11:39.926 "data_offset": 0, 00:11:39.926 "data_size": 63488 00:11:39.926 } 00:11:39.926 ] 00:11:39.926 }' 00:11:39.926 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.926 22:28:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.184 [2024-09-27 22:28:35.977581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.184 22:28:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.184 22:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.184 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.184 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.184 "name": "Existed_Raid", 00:11:40.184 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:40.184 "strip_size_kb": 64, 00:11:40.184 "state": "configuring", 00:11:40.184 "raid_level": "concat", 00:11:40.184 "superblock": true, 00:11:40.184 "num_base_bdevs": 3, 00:11:40.184 "num_base_bdevs_discovered": 2, 00:11:40.184 "num_base_bdevs_operational": 3, 00:11:40.184 "base_bdevs_list": [ 00:11:40.184 { 00:11:40.184 "name": "BaseBdev1", 00:11:40.184 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:40.184 "is_configured": true, 00:11:40.184 "data_offset": 2048, 00:11:40.184 "data_size": 63488 00:11:40.184 }, 00:11:40.184 { 00:11:40.184 "name": null, 00:11:40.184 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:40.184 "is_configured": 
false, 00:11:40.184 "data_offset": 0, 00:11:40.184 "data_size": 63488 00:11:40.184 }, 00:11:40.184 { 00:11:40.184 "name": "BaseBdev3", 00:11:40.184 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:40.184 "is_configured": true, 00:11:40.184 "data_offset": 2048, 00:11:40.184 "data_size": 63488 00:11:40.184 } 00:11:40.184 ] 00:11:40.184 }' 00:11:40.184 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.184 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.767 [2024-09-27 22:28:36.421190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:40.767 22:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.767 "name": "Existed_Raid", 00:11:40.767 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:40.767 "strip_size_kb": 64, 00:11:40.767 "state": "configuring", 00:11:40.767 "raid_level": "concat", 00:11:40.767 "superblock": true, 00:11:40.767 "num_base_bdevs": 3, 00:11:40.767 
"num_base_bdevs_discovered": 1, 00:11:40.767 "num_base_bdevs_operational": 3, 00:11:40.767 "base_bdevs_list": [ 00:11:40.767 { 00:11:40.767 "name": null, 00:11:40.767 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:40.767 "is_configured": false, 00:11:40.767 "data_offset": 0, 00:11:40.767 "data_size": 63488 00:11:40.767 }, 00:11:40.767 { 00:11:40.767 "name": null, 00:11:40.767 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:40.767 "is_configured": false, 00:11:40.767 "data_offset": 0, 00:11:40.767 "data_size": 63488 00:11:40.767 }, 00:11:40.767 { 00:11:40.767 "name": "BaseBdev3", 00:11:40.767 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:40.767 "is_configured": true, 00:11:40.767 "data_offset": 2048, 00:11:40.767 "data_size": 63488 00:11:40.767 } 00:11:40.767 ] 00:11:40.767 }' 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.767 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.026 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.026 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.026 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.285 22:28:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.285 [2024-09-27 22:28:36.951151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.285 22:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.285 
22:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.285 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.285 "name": "Existed_Raid", 00:11:41.285 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:41.285 "strip_size_kb": 64, 00:11:41.285 "state": "configuring", 00:11:41.285 "raid_level": "concat", 00:11:41.285 "superblock": true, 00:11:41.285 "num_base_bdevs": 3, 00:11:41.285 "num_base_bdevs_discovered": 2, 00:11:41.285 "num_base_bdevs_operational": 3, 00:11:41.285 "base_bdevs_list": [ 00:11:41.285 { 00:11:41.285 "name": null, 00:11:41.285 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:41.285 "is_configured": false, 00:11:41.285 "data_offset": 0, 00:11:41.285 "data_size": 63488 00:11:41.285 }, 00:11:41.285 { 00:11:41.285 "name": "BaseBdev2", 00:11:41.285 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:41.285 "is_configured": true, 00:11:41.285 "data_offset": 2048, 00:11:41.285 "data_size": 63488 00:11:41.285 }, 00:11:41.285 { 00:11:41.285 "name": "BaseBdev3", 00:11:41.285 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:41.285 "is_configured": true, 00:11:41.285 "data_offset": 2048, 00:11:41.285 "data_size": 63488 00:11:41.285 } 00:11:41.285 ] 00:11:41.285 }' 00:11:41.285 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.285 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.544 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.544 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.544 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.544 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:41.544 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.802 [2024-09-27 22:28:37.512180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.802 [2024-09-27 22:28:37.512686] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:41.802 [2024-09-27 22:28:37.512713] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:41.802 [2024-09-27 22:28:37.513016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:41.802 [2024-09-27 22:28:37.513160] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:41.802 [2024-09-27 22:28:37.513170] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:11:41.802 NewBaseBdev 00:11:41.802 [2024-09-27 22:28:37.513297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.802 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.802 [ 00:11:41.802 { 00:11:41.802 "name": "NewBaseBdev", 00:11:41.802 "aliases": [ 00:11:41.802 "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4" 00:11:41.802 ], 00:11:41.803 "product_name": "Malloc disk", 00:11:41.803 "block_size": 512, 
00:11:41.803 "num_blocks": 65536, 00:11:41.803 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:41.803 "assigned_rate_limits": { 00:11:41.803 "rw_ios_per_sec": 0, 00:11:41.803 "rw_mbytes_per_sec": 0, 00:11:41.803 "r_mbytes_per_sec": 0, 00:11:41.803 "w_mbytes_per_sec": 0 00:11:41.803 }, 00:11:41.803 "claimed": true, 00:11:41.803 "claim_type": "exclusive_write", 00:11:41.803 "zoned": false, 00:11:41.803 "supported_io_types": { 00:11:41.803 "read": true, 00:11:41.803 "write": true, 00:11:41.803 "unmap": true, 00:11:41.803 "flush": true, 00:11:41.803 "reset": true, 00:11:41.803 "nvme_admin": false, 00:11:41.803 "nvme_io": false, 00:11:41.803 "nvme_io_md": false, 00:11:41.803 "write_zeroes": true, 00:11:41.803 "zcopy": true, 00:11:41.803 "get_zone_info": false, 00:11:41.803 "zone_management": false, 00:11:41.803 "zone_append": false, 00:11:41.803 "compare": false, 00:11:41.803 "compare_and_write": false, 00:11:41.803 "abort": true, 00:11:41.803 "seek_hole": false, 00:11:41.803 "seek_data": false, 00:11:41.803 "copy": true, 00:11:41.803 "nvme_iov_md": false 00:11:41.803 }, 00:11:41.803 "memory_domains": [ 00:11:41.803 { 00:11:41.803 "dma_device_id": "system", 00:11:41.803 "dma_device_type": 1 00:11:41.803 }, 00:11:41.803 { 00:11:41.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.803 "dma_device_type": 2 00:11:41.803 } 00:11:41.803 ], 00:11:41.803 "driver_specific": {} 00:11:41.803 } 00:11:41.803 ] 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.803 "name": "Existed_Raid", 00:11:41.803 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:41.803 "strip_size_kb": 64, 00:11:41.803 "state": "online", 00:11:41.803 "raid_level": "concat", 00:11:41.803 "superblock": true, 00:11:41.803 "num_base_bdevs": 3, 00:11:41.803 "num_base_bdevs_discovered": 3, 00:11:41.803 "num_base_bdevs_operational": 3, 00:11:41.803 "base_bdevs_list": [ 00:11:41.803 { 00:11:41.803 "name": "NewBaseBdev", 00:11:41.803 "uuid": 
"27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:41.803 "is_configured": true, 00:11:41.803 "data_offset": 2048, 00:11:41.803 "data_size": 63488 00:11:41.803 }, 00:11:41.803 { 00:11:41.803 "name": "BaseBdev2", 00:11:41.803 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:41.803 "is_configured": true, 00:11:41.803 "data_offset": 2048, 00:11:41.803 "data_size": 63488 00:11:41.803 }, 00:11:41.803 { 00:11:41.803 "name": "BaseBdev3", 00:11:41.803 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:41.803 "is_configured": true, 00:11:41.803 "data_offset": 2048, 00:11:41.803 "data_size": 63488 00:11:41.803 } 00:11:41.803 ] 00:11:41.803 }' 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.803 22:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.370 22:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.370 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.370 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.370 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:11:42.370 [2024-09-27 22:28:38.008396] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.370 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.370 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.370 "name": "Existed_Raid", 00:11:42.370 "aliases": [ 00:11:42.370 "6b47b120-9791-4725-9ee1-d5bcb57df642" 00:11:42.370 ], 00:11:42.370 "product_name": "Raid Volume", 00:11:42.370 "block_size": 512, 00:11:42.370 "num_blocks": 190464, 00:11:42.370 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:42.370 "assigned_rate_limits": { 00:11:42.370 "rw_ios_per_sec": 0, 00:11:42.370 "rw_mbytes_per_sec": 0, 00:11:42.370 "r_mbytes_per_sec": 0, 00:11:42.370 "w_mbytes_per_sec": 0 00:11:42.370 }, 00:11:42.370 "claimed": false, 00:11:42.370 "zoned": false, 00:11:42.370 "supported_io_types": { 00:11:42.370 "read": true, 00:11:42.370 "write": true, 00:11:42.370 "unmap": true, 00:11:42.370 "flush": true, 00:11:42.370 "reset": true, 00:11:42.370 "nvme_admin": false, 00:11:42.370 "nvme_io": false, 00:11:42.370 "nvme_io_md": false, 00:11:42.370 "write_zeroes": true, 00:11:42.370 "zcopy": false, 00:11:42.370 "get_zone_info": false, 00:11:42.371 "zone_management": false, 00:11:42.371 "zone_append": false, 00:11:42.371 "compare": false, 00:11:42.371 "compare_and_write": false, 00:11:42.371 "abort": false, 00:11:42.371 "seek_hole": false, 00:11:42.371 "seek_data": false, 00:11:42.371 "copy": false, 00:11:42.371 "nvme_iov_md": false 00:11:42.371 }, 00:11:42.371 "memory_domains": [ 00:11:42.371 { 00:11:42.371 "dma_device_id": "system", 00:11:42.371 "dma_device_type": 1 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.371 "dma_device_type": 2 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 "dma_device_id": "system", 00:11:42.371 "dma_device_type": 1 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.371 "dma_device_type": 2 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 "dma_device_id": "system", 00:11:42.371 "dma_device_type": 1 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.371 "dma_device_type": 2 00:11:42.371 } 00:11:42.371 ], 00:11:42.371 "driver_specific": { 00:11:42.371 "raid": { 00:11:42.371 "uuid": "6b47b120-9791-4725-9ee1-d5bcb57df642", 00:11:42.371 "strip_size_kb": 64, 00:11:42.371 "state": "online", 00:11:42.371 "raid_level": "concat", 00:11:42.371 "superblock": true, 00:11:42.371 "num_base_bdevs": 3, 00:11:42.371 "num_base_bdevs_discovered": 3, 00:11:42.371 "num_base_bdevs_operational": 3, 00:11:42.371 "base_bdevs_list": [ 00:11:42.371 { 00:11:42.371 "name": "NewBaseBdev", 00:11:42.371 "uuid": "27701ea9-cd4a-4f4e-892e-a8a06ac0d6c4", 00:11:42.371 "is_configured": true, 00:11:42.371 "data_offset": 2048, 00:11:42.371 "data_size": 63488 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 "name": "BaseBdev2", 00:11:42.371 "uuid": "58d9cd6a-14dc-4c5e-a165-6d2d78c20665", 00:11:42.371 "is_configured": true, 00:11:42.371 "data_offset": 2048, 00:11:42.371 "data_size": 63488 00:11:42.371 }, 00:11:42.371 { 00:11:42.371 "name": "BaseBdev3", 00:11:42.371 "uuid": "04e18673-aac1-4591-a64e-d38474628c2b", 00:11:42.371 "is_configured": true, 00:11:42.371 "data_offset": 2048, 00:11:42.371 "data_size": 63488 00:11:42.371 } 00:11:42.371 ] 00:11:42.371 } 00:11:42.371 } 00:11:42.371 }' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.371 BaseBdev2 00:11:42.371 BaseBdev3' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.371 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.630 [2024-09-27 22:28:38.272093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.630 [2024-09-27 22:28:38.272123] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.630 [2024-09-27 22:28:38.272207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.630 [2024-09-27 22:28:38.272263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.630 [2024-09-27 22:28:38.272277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66769 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66769 ']' 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66769 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66769 00:11:42.630 killing process with pid 66769 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.630 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.631 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66769' 00:11:42.631 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66769 00:11:42.631 [2024-09-27 22:28:38.326739] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.631 22:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66769 00:11:42.890 [2024-09-27 22:28:38.627662] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.830 22:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.830 00:11:44.830 real 0m11.506s 00:11:44.830 user 0m17.442s 00:11:44.830 sys 0m2.191s 00:11:44.830 ************************************ 00:11:44.830 END TEST raid_state_function_test_sb 
00:11:44.830 ************************************ 00:11:44.830 22:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.830 22:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.088 22:28:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:45.088 22:28:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:45.088 22:28:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.088 22:28:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.088 ************************************ 00:11:45.088 START TEST raid_superblock_test 00:11:45.088 ************************************ 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:45.088 22:28:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67400 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67400 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 67400 ']' 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.088 22:28:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.088 [2024-09-27 22:28:40.824880] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:11:45.088 [2024-09-27 22:28:40.825041] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67400 ] 00:11:45.347 [2024-09-27 22:28:40.997508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.606 [2024-09-27 22:28:41.235371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.606 [2024-09-27 22:28:41.477626] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.606 [2024-09-27 22:28:41.477660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:46.174 
22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.174 malloc1 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.174 22:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.174 [2024-09-27 22:28:42.001935] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.174 [2024-09-27 22:28:42.002154] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.174 [2024-09-27 22:28:42.002221] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:46.174 [2024-09-27 22:28:42.002312] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.174 [2024-09-27 22:28:42.004758] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.174 [2024-09-27 22:28:42.004907] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.174 pt1 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.174 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 malloc2 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 [2024-09-27 22:28:42.062309] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.433 [2024-09-27 22:28:42.062494] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.433 [2024-09-27 22:28:42.062532] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:46.433 [2024-09-27 22:28:42.062545] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.433 [2024-09-27 22:28:42.064976] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.433 [2024-09-27 22:28:42.065031] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.433 
pt2 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 malloc3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 [2024-09-27 22:28:42.124977] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.433 [2024-09-27 22:28:42.125172] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.433 [2024-09-27 22:28:42.125232] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:46.433 [2024-09-27 22:28:42.125352] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.433 [2024-09-27 22:28:42.127783] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.433 [2024-09-27 22:28:42.127925] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.433 pt3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 [2024-09-27 22:28:42.137054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.433 [2024-09-27 22:28:42.139224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.433 [2024-09-27 22:28:42.139446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.433 [2024-09-27 22:28:42.139636] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:46.433 [2024-09-27 22:28:42.139653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:46.433 [2024-09-27 22:28:42.139963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:46.433 [2024-09-27 22:28:42.140188] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:46.433 [2024-09-27 22:28:42.140200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:46.433 [2024-09-27 22:28:42.140386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.433 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.434 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.434 22:28:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.434 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.434 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.434 "name": "raid_bdev1", 00:11:46.434 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:46.434 "strip_size_kb": 64, 00:11:46.434 "state": "online", 00:11:46.434 "raid_level": "concat", 00:11:46.434 "superblock": true, 00:11:46.434 "num_base_bdevs": 3, 00:11:46.434 "num_base_bdevs_discovered": 3, 00:11:46.434 "num_base_bdevs_operational": 3, 00:11:46.434 "base_bdevs_list": [ 00:11:46.434 { 00:11:46.434 "name": "pt1", 00:11:46.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.434 "is_configured": true, 00:11:46.434 "data_offset": 2048, 00:11:46.434 "data_size": 63488 00:11:46.434 }, 00:11:46.434 { 00:11:46.434 "name": "pt2", 00:11:46.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.434 "is_configured": true, 00:11:46.434 "data_offset": 2048, 00:11:46.434 "data_size": 63488 00:11:46.434 }, 00:11:46.434 { 00:11:46.434 "name": "pt3", 00:11:46.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.434 "is_configured": true, 00:11:46.434 "data_offset": 2048, 00:11:46.434 "data_size": 63488 00:11:46.434 } 00:11:46.434 ] 00:11:46.434 }' 00:11:46.434 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.434 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.690 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.690 [2024-09-27 22:28:42.536738] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.949 "name": "raid_bdev1", 00:11:46.949 "aliases": [ 00:11:46.949 "30494354-fe87-4936-b19c-9d5a6ba36661" 00:11:46.949 ], 00:11:46.949 "product_name": "Raid Volume", 00:11:46.949 "block_size": 512, 00:11:46.949 "num_blocks": 190464, 00:11:46.949 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:46.949 "assigned_rate_limits": { 00:11:46.949 "rw_ios_per_sec": 0, 00:11:46.949 "rw_mbytes_per_sec": 0, 00:11:46.949 "r_mbytes_per_sec": 0, 00:11:46.949 "w_mbytes_per_sec": 0 00:11:46.949 }, 00:11:46.949 "claimed": false, 00:11:46.949 "zoned": false, 00:11:46.949 "supported_io_types": { 00:11:46.949 "read": true, 00:11:46.949 "write": true, 00:11:46.949 "unmap": true, 00:11:46.949 "flush": true, 00:11:46.949 "reset": true, 00:11:46.949 "nvme_admin": false, 00:11:46.949 "nvme_io": false, 00:11:46.949 "nvme_io_md": false, 00:11:46.949 "write_zeroes": true, 00:11:46.949 "zcopy": false, 00:11:46.949 "get_zone_info": false, 00:11:46.949 "zone_management": false, 00:11:46.949 "zone_append": false, 00:11:46.949 "compare": 
false, 00:11:46.949 "compare_and_write": false, 00:11:46.949 "abort": false, 00:11:46.949 "seek_hole": false, 00:11:46.949 "seek_data": false, 00:11:46.949 "copy": false, 00:11:46.949 "nvme_iov_md": false 00:11:46.949 }, 00:11:46.949 "memory_domains": [ 00:11:46.949 { 00:11:46.949 "dma_device_id": "system", 00:11:46.949 "dma_device_type": 1 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.949 "dma_device_type": 2 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "dma_device_id": "system", 00:11:46.949 "dma_device_type": 1 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.949 "dma_device_type": 2 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "dma_device_id": "system", 00:11:46.949 "dma_device_type": 1 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.949 "dma_device_type": 2 00:11:46.949 } 00:11:46.949 ], 00:11:46.949 "driver_specific": { 00:11:46.949 "raid": { 00:11:46.949 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:46.949 "strip_size_kb": 64, 00:11:46.949 "state": "online", 00:11:46.949 "raid_level": "concat", 00:11:46.949 "superblock": true, 00:11:46.949 "num_base_bdevs": 3, 00:11:46.949 "num_base_bdevs_discovered": 3, 00:11:46.949 "num_base_bdevs_operational": 3, 00:11:46.949 "base_bdevs_list": [ 00:11:46.949 { 00:11:46.949 "name": "pt1", 00:11:46.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.949 "is_configured": true, 00:11:46.949 "data_offset": 2048, 00:11:46.949 "data_size": 63488 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "name": "pt2", 00:11:46.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.949 "is_configured": true, 00:11:46.949 "data_offset": 2048, 00:11:46.949 "data_size": 63488 00:11:46.949 }, 00:11:46.949 { 00:11:46.949 "name": "pt3", 00:11:46.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.949 "is_configured": true, 00:11:46.949 "data_offset": 2048, 00:11:46.949 
"data_size": 63488 00:11:46.949 } 00:11:46.949 ] 00:11:46.949 } 00:11:46.949 } 00:11:46.949 }' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.949 pt2 00:11:46.949 pt3' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.949 22:28:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:46.949 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.949 [2024-09-27 22:28:42.824299] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.209 22:28:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=30494354-fe87-4936-b19c-9d5a6ba36661 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 30494354-fe87-4936-b19c-9d5a6ba36661 ']' 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.209 [2024-09-27 22:28:42.871930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.209 [2024-09-27 22:28:42.872102] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.209 [2024-09-27 22:28:42.872203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.209 [2024-09-27 22:28:42.872269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.209 [2024-09-27 22:28:42.872285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:47.209 22:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.209 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.209 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:47.209 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.209 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.210 [2024-09-27 22:28:43.023780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:47.210 [2024-09-27 22:28:43.026337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:11:47.210 [2024-09-27 22:28:43.026536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:47.210 [2024-09-27 22:28:43.026603] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:47.210 [2024-09-27 22:28:43.026661] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:47.210 [2024-09-27 22:28:43.026685] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:47.210 [2024-09-27 22:28:43.026708] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.210 [2024-09-27 22:28:43.026719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:47.210 request: 00:11:47.210 { 00:11:47.210 "name": "raid_bdev1", 00:11:47.210 "raid_level": "concat", 00:11:47.210 "base_bdevs": [ 00:11:47.210 "malloc1", 00:11:47.210 "malloc2", 00:11:47.210 "malloc3" 00:11:47.210 ], 00:11:47.210 "strip_size_kb": 64, 00:11:47.210 "superblock": false, 00:11:47.210 "method": "bdev_raid_create", 00:11:47.210 "req_id": 1 00:11:47.210 } 00:11:47.210 Got JSON-RPC error response 00:11:47.210 response: 00:11:47.210 { 00:11:47.210 "code": -17, 00:11:47.210 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:47.210 } 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.210 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.469 [2024-09-27 22:28:43.095622] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.469 [2024-09-27 22:28:43.095834] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.469 [2024-09-27 22:28:43.095896] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:47.469 [2024-09-27 22:28:43.096023] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.469 [2024-09-27 22:28:43.098558] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.469 pt1 00:11:47.469 [2024-09-27 22:28:43.098695] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.469 [2024-09-27 22:28:43.098803] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:47.469 [2024-09-27 22:28:43.098865] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.469 "name": "raid_bdev1", 00:11:47.469 
"uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:47.469 "strip_size_kb": 64, 00:11:47.469 "state": "configuring", 00:11:47.469 "raid_level": "concat", 00:11:47.469 "superblock": true, 00:11:47.469 "num_base_bdevs": 3, 00:11:47.469 "num_base_bdevs_discovered": 1, 00:11:47.469 "num_base_bdevs_operational": 3, 00:11:47.469 "base_bdevs_list": [ 00:11:47.469 { 00:11:47.469 "name": "pt1", 00:11:47.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.469 "is_configured": true, 00:11:47.469 "data_offset": 2048, 00:11:47.469 "data_size": 63488 00:11:47.469 }, 00:11:47.469 { 00:11:47.469 "name": null, 00:11:47.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.469 "is_configured": false, 00:11:47.469 "data_offset": 2048, 00:11:47.469 "data_size": 63488 00:11:47.469 }, 00:11:47.469 { 00:11:47.469 "name": null, 00:11:47.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.469 "is_configured": false, 00:11:47.469 "data_offset": 2048, 00:11:47.469 "data_size": 63488 00:11:47.469 } 00:11:47.469 ] 00:11:47.469 }' 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.469 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.728 [2024-09-27 22:28:43.527313] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.728 [2024-09-27 22:28:43.527517] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.728 [2024-09-27 22:28:43.527586] 
vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:47.728 [2024-09-27 22:28:43.527714] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.728 [2024-09-27 22:28:43.528226] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.728 [2024-09-27 22:28:43.528256] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.728 [2024-09-27 22:28:43.528351] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.728 [2024-09-27 22:28:43.528375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.728 pt2 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.728 [2024-09-27 22:28:43.539321] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.728 "name": "raid_bdev1", 00:11:47.728 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:47.728 "strip_size_kb": 64, 00:11:47.728 "state": "configuring", 00:11:47.728 "raid_level": "concat", 00:11:47.728 "superblock": true, 00:11:47.728 "num_base_bdevs": 3, 00:11:47.728 "num_base_bdevs_discovered": 1, 00:11:47.728 "num_base_bdevs_operational": 3, 00:11:47.728 "base_bdevs_list": [ 00:11:47.728 { 00:11:47.728 "name": "pt1", 00:11:47.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.728 "is_configured": true, 00:11:47.728 "data_offset": 2048, 00:11:47.728 "data_size": 63488 00:11:47.728 }, 00:11:47.728 { 00:11:47.728 "name": null, 00:11:47.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.728 "is_configured": false, 00:11:47.728 "data_offset": 0, 00:11:47.728 "data_size": 63488 00:11:47.728 }, 00:11:47.728 { 00:11:47.728 "name": null, 00:11:47.728 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.728 "is_configured": false, 00:11:47.728 "data_offset": 2048, 00:11:47.728 "data_size": 63488 00:11:47.728 } 00:11:47.728 ] 00:11:47.728 }' 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.728 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.294 [2024-09-27 22:28:43.955283] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.294 [2024-09-27 22:28:43.955495] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.294 [2024-09-27 22:28:43.955526] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:48.294 [2024-09-27 22:28:43.955542] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.294 [2024-09-27 22:28:43.956077] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.294 [2024-09-27 22:28:43.956104] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.294 [2024-09-27 22:28:43.956194] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.294 [2024-09-27 22:28:43.956238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.294 pt2 00:11:48.294 22:28:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.294 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.294 [2024-09-27 22:28:43.967305] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.294 [2024-09-27 22:28:43.967497] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.294 [2024-09-27 22:28:43.967552] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.294 [2024-09-27 22:28:43.967642] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.294 [2024-09-27 22:28:43.968115] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.294 [2024-09-27 22:28:43.968145] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.294 [2024-09-27 22:28:43.968239] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:48.294 [2024-09-27 22:28:43.968270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.294 [2024-09-27 22:28:43.968391] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.294 [2024-09-27 22:28:43.968406] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:48.294 [2024-09-27 22:28:43.968703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:48.294 [2024-09-27 22:28:43.968861] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:48.294 [2024-09-27 22:28:43.968871] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:48.294 [2024-09-27 22:28:43.969033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.294 pt3 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.295 22:28:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.295 22:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.295 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.295 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.295 "name": "raid_bdev1", 00:11:48.295 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:48.295 "strip_size_kb": 64, 00:11:48.295 "state": "online", 00:11:48.295 "raid_level": "concat", 00:11:48.295 "superblock": true, 00:11:48.295 "num_base_bdevs": 3, 00:11:48.295 "num_base_bdevs_discovered": 3, 00:11:48.295 "num_base_bdevs_operational": 3, 00:11:48.295 "base_bdevs_list": [ 00:11:48.295 { 00:11:48.295 "name": "pt1", 00:11:48.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.295 "is_configured": true, 00:11:48.295 "data_offset": 2048, 00:11:48.295 "data_size": 63488 00:11:48.295 }, 00:11:48.295 { 00:11:48.295 "name": "pt2", 00:11:48.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.295 "is_configured": true, 00:11:48.295 "data_offset": 2048, 00:11:48.295 "data_size": 63488 00:11:48.295 }, 00:11:48.295 { 00:11:48.295 "name": "pt3", 00:11:48.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.295 "is_configured": true, 00:11:48.295 "data_offset": 2048, 00:11:48.295 "data_size": 63488 00:11:48.295 } 00:11:48.295 ] 00:11:48.295 }' 00:11:48.295 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.295 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.553 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.553 [2024-09-27 22:28:44.427601] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.811 "name": "raid_bdev1", 00:11:48.811 "aliases": [ 00:11:48.811 "30494354-fe87-4936-b19c-9d5a6ba36661" 00:11:48.811 ], 00:11:48.811 "product_name": "Raid Volume", 00:11:48.811 "block_size": 512, 00:11:48.811 "num_blocks": 190464, 00:11:48.811 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:48.811 "assigned_rate_limits": { 00:11:48.811 "rw_ios_per_sec": 0, 00:11:48.811 "rw_mbytes_per_sec": 0, 00:11:48.811 "r_mbytes_per_sec": 0, 00:11:48.811 "w_mbytes_per_sec": 0 00:11:48.811 }, 00:11:48.811 "claimed": false, 00:11:48.811 "zoned": false, 00:11:48.811 "supported_io_types": { 00:11:48.811 "read": true, 00:11:48.811 "write": true, 00:11:48.811 "unmap": true, 00:11:48.811 "flush": true, 00:11:48.811 "reset": true, 00:11:48.811 "nvme_admin": false, 00:11:48.811 "nvme_io": false, 
00:11:48.811 "nvme_io_md": false, 00:11:48.811 "write_zeroes": true, 00:11:48.811 "zcopy": false, 00:11:48.811 "get_zone_info": false, 00:11:48.811 "zone_management": false, 00:11:48.811 "zone_append": false, 00:11:48.811 "compare": false, 00:11:48.811 "compare_and_write": false, 00:11:48.811 "abort": false, 00:11:48.811 "seek_hole": false, 00:11:48.811 "seek_data": false, 00:11:48.811 "copy": false, 00:11:48.811 "nvme_iov_md": false 00:11:48.811 }, 00:11:48.811 "memory_domains": [ 00:11:48.811 { 00:11:48.811 "dma_device_id": "system", 00:11:48.811 "dma_device_type": 1 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.811 "dma_device_type": 2 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "dma_device_id": "system", 00:11:48.811 "dma_device_type": 1 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.811 "dma_device_type": 2 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "dma_device_id": "system", 00:11:48.811 "dma_device_type": 1 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.811 "dma_device_type": 2 00:11:48.811 } 00:11:48.811 ], 00:11:48.811 "driver_specific": { 00:11:48.811 "raid": { 00:11:48.811 "uuid": "30494354-fe87-4936-b19c-9d5a6ba36661", 00:11:48.811 "strip_size_kb": 64, 00:11:48.811 "state": "online", 00:11:48.811 "raid_level": "concat", 00:11:48.811 "superblock": true, 00:11:48.811 "num_base_bdevs": 3, 00:11:48.811 "num_base_bdevs_discovered": 3, 00:11:48.811 "num_base_bdevs_operational": 3, 00:11:48.811 "base_bdevs_list": [ 00:11:48.811 { 00:11:48.811 "name": "pt1", 00:11:48.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.811 "is_configured": true, 00:11:48.811 "data_offset": 2048, 00:11:48.811 "data_size": 63488 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "name": "pt2", 00:11:48.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.811 "is_configured": true, 00:11:48.811 "data_offset": 2048, 00:11:48.811 
"data_size": 63488 00:11:48.811 }, 00:11:48.811 { 00:11:48.811 "name": "pt3", 00:11:48.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.811 "is_configured": true, 00:11:48.811 "data_offset": 2048, 00:11:48.811 "data_size": 63488 00:11:48.811 } 00:11:48.811 ] 00:11:48.811 } 00:11:48.811 } 00:11:48.811 }' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.811 pt2 00:11:48.811 pt3' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.811 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:49.070 [2024-09-27 22:28:44.723581] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 30494354-fe87-4936-b19c-9d5a6ba36661 '!=' 30494354-fe87-4936-b19c-9d5a6ba36661 ']' 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67400 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 67400 ']' 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 67400 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67400 00:11:49.070 killing process with pid 67400 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67400' 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 67400 00:11:49.070 [2024-09-27 22:28:44.808248] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:49.070 [2024-09-27 22:28:44.808353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.070 22:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 67400 00:11:49.070 [2024-09-27 22:28:44.808419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.070 [2024-09-27 22:28:44.808437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.328 [2024-09-27 22:28:45.144274] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.863 22:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.863 00:11:51.863 real 0m6.536s 00:11:51.863 user 0m8.706s 00:11:51.863 sys 0m1.136s 00:11:51.863 ************************************ 00:11:51.863 END TEST raid_superblock_test 00:11:51.863 ************************************ 00:11:51.863 22:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.863 22:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.863 22:28:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:51.863 22:28:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:51.863 22:28:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.863 22:28:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.863 ************************************ 00:11:51.863 START TEST raid_read_error_test 00:11:51.863 ************************************ 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.863 22:28:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y1WsIeps8C 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67664 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67664 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67664 ']' 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.863 22:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.864 [2024-09-27 22:28:47.463231] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:11:51.864 [2024-09-27 22:28:47.463424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67664 ] 00:11:51.864 [2024-09-27 22:28:47.643230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.122 [2024-09-27 22:28:47.889027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.380 [2024-09-27 22:28:48.130690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.380 [2024-09-27 22:28:48.130732] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.945 BaseBdev1_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.945 true 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.945 [2024-09-27 22:28:48.691440] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.945 [2024-09-27 22:28:48.691502] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.945 [2024-09-27 22:28:48.691524] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.945 [2024-09-27 22:28:48.691540] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.945 [2024-09-27 22:28:48.693965] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.945 [2024-09-27 22:28:48.694020] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.945 BaseBdev1 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.945 BaseBdev2_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.945 true 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.945 [2024-09-27 22:28:48.766330] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:52.945 [2024-09-27 22:28:48.766511] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.945 [2024-09-27 22:28:48.766566] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:52.945 [2024-09-27 22:28:48.766645] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.945 [2024-09-27 22:28:48.769169] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.945 [2024-09-27 22:28:48.769313] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.945 BaseBdev2 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:52.945 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.946 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.204 BaseBdev3_malloc 00:11:53.204 22:28:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.204 true 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.204 [2024-09-27 22:28:48.842440] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.204 [2024-09-27 22:28:48.842632] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.204 [2024-09-27 22:28:48.842688] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.204 [2024-09-27 22:28:48.842825] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.204 [2024-09-27 22:28:48.845400] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.204 [2024-09-27 22:28:48.845560] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.204 BaseBdev3 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.204 [2024-09-27 22:28:48.854498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.204 [2024-09-27 22:28:48.856842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.204 [2024-09-27 22:28:48.857070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.204 [2024-09-27 22:28:48.857405] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:53.204 [2024-09-27 22:28:48.857497] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:53.204 [2024-09-27 22:28:48.857894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:53.204 [2024-09-27 22:28:48.858151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:53.204 [2024-09-27 22:28:48.858284] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:53.204 [2024-09-27 22:28:48.858604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.204 22:28:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.204 "name": "raid_bdev1", 00:11:53.204 "uuid": "278c890e-9f86-4ed3-96f6-c5d9ef25e8a4", 00:11:53.204 "strip_size_kb": 64, 00:11:53.204 "state": "online", 00:11:53.204 "raid_level": "concat", 00:11:53.204 "superblock": true, 00:11:53.204 "num_base_bdevs": 3, 00:11:53.204 "num_base_bdevs_discovered": 3, 00:11:53.204 "num_base_bdevs_operational": 3, 00:11:53.204 "base_bdevs_list": [ 00:11:53.204 { 00:11:53.204 "name": "BaseBdev1", 00:11:53.204 "uuid": "37f2b170-43b7-5310-bb42-565b34eff8e4", 00:11:53.204 "is_configured": true, 00:11:53.204 "data_offset": 2048, 00:11:53.204 "data_size": 63488 00:11:53.204 }, 00:11:53.204 { 00:11:53.204 "name": "BaseBdev2", 00:11:53.204 "uuid": "52a5052a-3e2b-5ae4-9162-236a95f47e85", 00:11:53.204 "is_configured": true, 00:11:53.204 "data_offset": 2048, 00:11:53.204 "data_size": 63488 
00:11:53.204 }, 00:11:53.204 { 00:11:53.204 "name": "BaseBdev3", 00:11:53.204 "uuid": "badfb451-3a30-53d3-906b-5ed00d03a06d", 00:11:53.204 "is_configured": true, 00:11:53.204 "data_offset": 2048, 00:11:53.204 "data_size": 63488 00:11:53.204 } 00:11:53.204 ] 00:11:53.204 }' 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.204 22:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.464 22:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:53.464 22:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.723 [2024-09-27 22:28:49.407389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.659 "name": "raid_bdev1", 00:11:54.659 "uuid": "278c890e-9f86-4ed3-96f6-c5d9ef25e8a4", 00:11:54.659 "strip_size_kb": 64, 00:11:54.659 "state": "online", 00:11:54.659 "raid_level": "concat", 00:11:54.659 "superblock": true, 00:11:54.659 "num_base_bdevs": 3, 00:11:54.659 "num_base_bdevs_discovered": 3, 00:11:54.659 "num_base_bdevs_operational": 3, 00:11:54.659 "base_bdevs_list": [ 00:11:54.659 { 00:11:54.659 "name": "BaseBdev1", 00:11:54.659 "uuid": "37f2b170-43b7-5310-bb42-565b34eff8e4", 00:11:54.659 "is_configured": true, 00:11:54.659 "data_offset": 2048, 00:11:54.659 "data_size": 63488 
00:11:54.659 }, 00:11:54.659 { 00:11:54.659 "name": "BaseBdev2", 00:11:54.659 "uuid": "52a5052a-3e2b-5ae4-9162-236a95f47e85", 00:11:54.659 "is_configured": true, 00:11:54.659 "data_offset": 2048, 00:11:54.659 "data_size": 63488 00:11:54.659 }, 00:11:54.659 { 00:11:54.659 "name": "BaseBdev3", 00:11:54.659 "uuid": "badfb451-3a30-53d3-906b-5ed00d03a06d", 00:11:54.659 "is_configured": true, 00:11:54.659 "data_offset": 2048, 00:11:54.659 "data_size": 63488 00:11:54.659 } 00:11:54.659 ] 00:11:54.659 }' 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.659 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.918 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.918 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 [2024-09-27 22:28:50.780726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.918 [2024-09-27 22:28:50.780767] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.918 [2024-09-27 22:28:50.783713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.918 [2024-09-27 22:28:50.783769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.918 [2024-09-27 22:28:50.783809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.918 [2024-09-27 22:28:50.783821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:54.918 { 00:11:54.918 "results": [ 00:11:54.918 { 00:11:54.918 "job": "raid_bdev1", 00:11:54.918 "core_mask": "0x1", 00:11:54.918 "workload": "randrw", 00:11:54.918 "percentage": 50, 
00:11:54.918 "status": "finished", 00:11:54.918 "queue_depth": 1, 00:11:54.918 "io_size": 131072, 00:11:54.918 "runtime": 1.373088, 00:11:54.918 "iops": 14645.091938754107, 00:11:54.918 "mibps": 1830.6364923442634, 00:11:54.918 "io_failed": 1, 00:11:54.918 "io_timeout": 0, 00:11:54.918 "avg_latency_us": 94.11752917188396, 00:11:54.918 "min_latency_us": 27.964658634538154, 00:11:54.918 "max_latency_us": 1539.701204819277 00:11:54.918 } 00:11:54.918 ], 00:11:54.918 "core_count": 1 00:11:54.918 } 00:11:54.919 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.919 22:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67664 00:11:54.919 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67664 ']' 00:11:54.919 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67664 00:11:54.919 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67664 00:11:55.177 killing process with pid 67664 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67664' 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67664 00:11:55.177 [2024-09-27 22:28:50.836309] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.177 22:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67664 00:11:55.436 [2024-09-27 
22:28:51.090466] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y1WsIeps8C 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:57.339 ************************************ 00:11:57.339 END TEST raid_read_error_test 00:11:57.339 ************************************ 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:57.339 00:11:57.339 real 0m5.851s 00:11:57.339 user 0m6.612s 00:11:57.339 sys 0m0.728s 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.339 22:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.598 22:28:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:57.598 22:28:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:57.598 22:28:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.598 22:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.598 ************************************ 00:11:57.598 START TEST raid_write_error_test 00:11:57.598 ************************************ 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:11:57.598 22:28:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:57.598 22:28:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z1jdENJWeW 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67821 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67821 00:11:57.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67821 ']' 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.598 22:28:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.598 [2024-09-27 22:28:53.377723] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:11:57.598 [2024-09-27 22:28:53.378060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67821 ] 00:11:57.856 [2024-09-27 22:28:53.547676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.115 [2024-09-27 22:28:53.783921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.374 [2024-09-27 22:28:54.023159] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.374 [2024-09-27 22:28:54.023204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 BaseBdev1_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 true 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 [2024-09-27 22:28:54.578646] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:58.942 [2024-09-27 22:28:54.578736] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.942 [2024-09-27 22:28:54.578759] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:58.942 [2024-09-27 22:28:54.578774] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.942 [2024-09-27 22:28:54.581299] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.942 [2024-09-27 22:28:54.581517] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:58.942 BaseBdev1 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.942 BaseBdev2_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 true 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 [2024-09-27 22:28:54.653285] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:58.942 [2024-09-27 22:28:54.653361] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.942 [2024-09-27 22:28:54.653383] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:58.942 [2024-09-27 22:28:54.653397] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.942 [2024-09-27 22:28:54.655908] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.942 [2024-09-27 22:28:54.655965] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.942 BaseBdev2 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:58.942 22:28:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 BaseBdev3_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 true 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 [2024-09-27 22:28:54.728024] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:58.942 [2024-09-27 22:28:54.728093] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.942 [2024-09-27 22:28:54.728116] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:58.942 [2024-09-27 22:28:54.728131] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.942 [2024-09-27 22:28:54.730587] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.942 [2024-09-27 22:28:54.730635] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:58.942 BaseBdev3 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 [2024-09-27 22:28:54.740098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.942 [2024-09-27 22:28:54.742473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.942 [2024-09-27 22:28:54.742671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.942 [2024-09-27 22:28:54.743042] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:58.942 [2024-09-27 22:28:54.743154] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:58.942 [2024-09-27 22:28:54.743531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:58.942 [2024-09-27 22:28:54.743800] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:58.942 [2024-09-27 22:28:54.743891] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:58.942 [2024-09-27 22:28:54.744297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.942 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.942 "name": "raid_bdev1", 00:11:58.942 "uuid": "6c21639f-5c0c-434d-af57-9819d75d83fd", 00:11:58.943 "strip_size_kb": 64, 00:11:58.943 "state": "online", 00:11:58.943 "raid_level": "concat", 00:11:58.943 "superblock": true, 00:11:58.943 "num_base_bdevs": 3, 00:11:58.943 "num_base_bdevs_discovered": 3, 00:11:58.943 "num_base_bdevs_operational": 3, 00:11:58.943 "base_bdevs_list": [ 00:11:58.943 { 00:11:58.943 
"name": "BaseBdev1", 00:11:58.943 "uuid": "3eac1fac-8be0-55dc-83ce-98957424151f", 00:11:58.943 "is_configured": true, 00:11:58.943 "data_offset": 2048, 00:11:58.943 "data_size": 63488 00:11:58.943 }, 00:11:58.943 { 00:11:58.943 "name": "BaseBdev2", 00:11:58.943 "uuid": "7d67c339-9fe9-577b-b01d-aeefe29bb930", 00:11:58.943 "is_configured": true, 00:11:58.943 "data_offset": 2048, 00:11:58.943 "data_size": 63488 00:11:58.943 }, 00:11:58.943 { 00:11:58.943 "name": "BaseBdev3", 00:11:58.943 "uuid": "ce2a35a5-552b-5d10-8730-be504c4b5294", 00:11:58.943 "is_configured": true, 00:11:58.943 "data_offset": 2048, 00:11:58.943 "data_size": 63488 00:11:58.943 } 00:11:58.943 ] 00:11:58.943 }' 00:11:58.943 22:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.943 22:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.540 22:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:59.540 22:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:59.540 [2024-09-27 22:28:55.285203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.476 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.476 "name": "raid_bdev1", 00:12:00.476 "uuid": "6c21639f-5c0c-434d-af57-9819d75d83fd", 00:12:00.476 "strip_size_kb": 64, 00:12:00.476 "state": "online", 
00:12:00.476 "raid_level": "concat", 00:12:00.476 "superblock": true, 00:12:00.476 "num_base_bdevs": 3, 00:12:00.476 "num_base_bdevs_discovered": 3, 00:12:00.476 "num_base_bdevs_operational": 3, 00:12:00.477 "base_bdevs_list": [ 00:12:00.477 { 00:12:00.477 "name": "BaseBdev1", 00:12:00.477 "uuid": "3eac1fac-8be0-55dc-83ce-98957424151f", 00:12:00.477 "is_configured": true, 00:12:00.477 "data_offset": 2048, 00:12:00.477 "data_size": 63488 00:12:00.477 }, 00:12:00.477 { 00:12:00.477 "name": "BaseBdev2", 00:12:00.477 "uuid": "7d67c339-9fe9-577b-b01d-aeefe29bb930", 00:12:00.477 "is_configured": true, 00:12:00.477 "data_offset": 2048, 00:12:00.477 "data_size": 63488 00:12:00.477 }, 00:12:00.477 { 00:12:00.477 "name": "BaseBdev3", 00:12:00.477 "uuid": "ce2a35a5-552b-5d10-8730-be504c4b5294", 00:12:00.477 "is_configured": true, 00:12:00.477 "data_offset": 2048, 00:12:00.477 "data_size": 63488 00:12:00.477 } 00:12:00.477 ] 00:12:00.477 }' 00:12:00.477 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.477 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.044 [2024-09-27 22:28:56.641930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.044 [2024-09-27 22:28:56.642148] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.044 [2024-09-27 22:28:56.645128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.044 [2024-09-27 22:28:56.645326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.044 [2024-09-27 22:28:56.645415] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.044 [2024-09-27 22:28:56.645523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:01.044 { 00:12:01.044 "results": [ 00:12:01.044 { 00:12:01.044 "job": "raid_bdev1", 00:12:01.044 "core_mask": "0x1", 00:12:01.044 "workload": "randrw", 00:12:01.044 "percentage": 50, 00:12:01.044 "status": "finished", 00:12:01.044 "queue_depth": 1, 00:12:01.044 "io_size": 131072, 00:12:01.044 "runtime": 1.357177, 00:12:01.044 "iops": 14451.320645722702, 00:12:01.044 "mibps": 1806.4150807153378, 00:12:01.044 "io_failed": 1, 00:12:01.044 "io_timeout": 0, 00:12:01.044 "avg_latency_us": 95.49062201697582, 00:12:01.044 "min_latency_us": 28.58152610441767, 00:12:01.044 "max_latency_us": 1513.3815261044176 00:12:01.044 } 00:12:01.044 ], 00:12:01.044 "core_count": 1 00:12:01.044 } 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67821 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67821 ']' 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67821 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67821 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.044 killing process with pid 67821 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.044 22:28:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67821' 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67821 00:12:01.044 [2024-09-27 22:28:56.690039] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.044 22:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67821 00:12:01.303 [2024-09-27 22:28:56.943214] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z1jdENJWeW 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:03.254 22:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:12:03.255 00:12:03.255 real 0m5.834s 00:12:03.255 user 0m6.576s 00:12:03.255 sys 0m0.698s 00:12:03.255 22:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.255 22:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.255 ************************************ 00:12:03.255 END TEST raid_write_error_test 00:12:03.255 ************************************ 00:12:03.514 22:28:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:03.514 22:28:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:03.514 22:28:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:03.515 22:28:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.515 22:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.515 ************************************ 00:12:03.515 START TEST raid_state_function_test 00:12:03.515 ************************************ 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:03.515 Process raid pid: 67976 00:12:03.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67976 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67976' 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67976 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67976 ']' 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:03.515 22:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.515 [2024-09-27 22:28:59.276855] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:12:03.515 [2024-09-27 22:28:59.277287] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.774 [2024-09-27 22:28:59.452298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.034 [2024-09-27 22:28:59.686856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.293 [2024-09-27 22:28:59.932296] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.293 [2024-09-27 22:28:59.932527] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.552 [2024-09-27 22:29:00.421477] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.552 [2024-09-27 22:29:00.421665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.552 [2024-09-27 22:29:00.421688] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.552 [2024-09-27 22:29:00.421703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.552 [2024-09-27 22:29:00.421711] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:04.552 [2024-09-27 22:29:00.421726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.552 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.812 22:29:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.812 "name": "Existed_Raid", 00:12:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.812 "strip_size_kb": 0, 00:12:04.812 "state": "configuring", 00:12:04.812 "raid_level": "raid1", 00:12:04.812 "superblock": false, 00:12:04.812 "num_base_bdevs": 3, 00:12:04.812 "num_base_bdevs_discovered": 0, 00:12:04.812 "num_base_bdevs_operational": 3, 00:12:04.812 "base_bdevs_list": [ 00:12:04.812 { 00:12:04.812 "name": "BaseBdev1", 00:12:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.812 "is_configured": false, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 0 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "name": "BaseBdev2", 00:12:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.812 "is_configured": false, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 0 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "name": "BaseBdev3", 00:12:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.812 "is_configured": false, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 0 00:12:04.812 } 00:12:04.812 ] 00:12:04.812 }' 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.812 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 [2024-09-27 22:29:00.832831] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.072 [2024-09-27 22:29:00.832877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 [2024-09-27 22:29:00.844825] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.072 [2024-09-27 22:29:00.844884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.072 [2024-09-27 22:29:00.844894] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.072 [2024-09-27 22:29:00.844907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.072 [2024-09-27 22:29:00.844915] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.072 [2024-09-27 22:29:00.844928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.072 [2024-09-27 22:29:00.896117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.072 BaseBdev1 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.072 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.073 [ 00:12:05.073 { 00:12:05.073 "name": "BaseBdev1", 00:12:05.073 "aliases": [ 00:12:05.073 "03c9de11-be4e-4495-b6a5-be3240678bd0" 00:12:05.073 ], 00:12:05.073 "product_name": "Malloc disk", 00:12:05.073 "block_size": 512, 00:12:05.073 "num_blocks": 65536, 00:12:05.073 "uuid": "03c9de11-be4e-4495-b6a5-be3240678bd0", 00:12:05.073 "assigned_rate_limits": { 00:12:05.073 "rw_ios_per_sec": 0, 00:12:05.073 "rw_mbytes_per_sec": 0, 00:12:05.073 "r_mbytes_per_sec": 0, 00:12:05.073 "w_mbytes_per_sec": 0 00:12:05.073 }, 
00:12:05.073 "claimed": true, 00:12:05.073 "claim_type": "exclusive_write", 00:12:05.073 "zoned": false, 00:12:05.073 "supported_io_types": { 00:12:05.073 "read": true, 00:12:05.073 "write": true, 00:12:05.073 "unmap": true, 00:12:05.073 "flush": true, 00:12:05.073 "reset": true, 00:12:05.073 "nvme_admin": false, 00:12:05.073 "nvme_io": false, 00:12:05.073 "nvme_io_md": false, 00:12:05.073 "write_zeroes": true, 00:12:05.073 "zcopy": true, 00:12:05.073 "get_zone_info": false, 00:12:05.073 "zone_management": false, 00:12:05.073 "zone_append": false, 00:12:05.073 "compare": false, 00:12:05.073 "compare_and_write": false, 00:12:05.073 "abort": true, 00:12:05.073 "seek_hole": false, 00:12:05.073 "seek_data": false, 00:12:05.073 "copy": true, 00:12:05.073 "nvme_iov_md": false 00:12:05.073 }, 00:12:05.073 "memory_domains": [ 00:12:05.073 { 00:12:05.073 "dma_device_id": "system", 00:12:05.073 "dma_device_type": 1 00:12:05.073 }, 00:12:05.073 { 00:12:05.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.073 "dma_device_type": 2 00:12:05.073 } 00:12:05.073 ], 00:12:05.073 "driver_specific": {} 00:12:05.073 } 00:12:05.073 ] 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.073 22:29:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.073 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.332 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.332 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.332 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.332 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.332 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.332 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.332 "name": "Existed_Raid", 00:12:05.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.332 "strip_size_kb": 0, 00:12:05.332 "state": "configuring", 00:12:05.332 "raid_level": "raid1", 00:12:05.332 "superblock": false, 00:12:05.332 "num_base_bdevs": 3, 00:12:05.332 "num_base_bdevs_discovered": 1, 00:12:05.332 "num_base_bdevs_operational": 3, 00:12:05.332 "base_bdevs_list": [ 00:12:05.332 { 00:12:05.332 "name": "BaseBdev1", 00:12:05.332 "uuid": "03c9de11-be4e-4495-b6a5-be3240678bd0", 00:12:05.332 "is_configured": true, 00:12:05.333 "data_offset": 0, 00:12:05.333 "data_size": 65536 00:12:05.333 }, 00:12:05.333 { 00:12:05.333 "name": "BaseBdev2", 00:12:05.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.333 "is_configured": false, 00:12:05.333 
"data_offset": 0, 00:12:05.333 "data_size": 0 00:12:05.333 }, 00:12:05.333 { 00:12:05.333 "name": "BaseBdev3", 00:12:05.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.333 "is_configured": false, 00:12:05.333 "data_offset": 0, 00:12:05.333 "data_size": 0 00:12:05.333 } 00:12:05.333 ] 00:12:05.333 }' 00:12:05.333 22:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.333 22:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.592 [2024-09-27 22:29:01.375492] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.592 [2024-09-27 22:29:01.375689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.592 [2024-09-27 22:29:01.383505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.592 [2024-09-27 22:29:01.385566] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.592 [2024-09-27 22:29:01.385617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:12:05.592 [2024-09-27 22:29:01.385628] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.592 [2024-09-27 22:29:01.385641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.592 
22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.592 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.592 "name": "Existed_Raid", 00:12:05.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.592 "strip_size_kb": 0, 00:12:05.592 "state": "configuring", 00:12:05.592 "raid_level": "raid1", 00:12:05.592 "superblock": false, 00:12:05.592 "num_base_bdevs": 3, 00:12:05.592 "num_base_bdevs_discovered": 1, 00:12:05.592 "num_base_bdevs_operational": 3, 00:12:05.592 "base_bdevs_list": [ 00:12:05.592 { 00:12:05.592 "name": "BaseBdev1", 00:12:05.592 "uuid": "03c9de11-be4e-4495-b6a5-be3240678bd0", 00:12:05.592 "is_configured": true, 00:12:05.592 "data_offset": 0, 00:12:05.592 "data_size": 65536 00:12:05.592 }, 00:12:05.592 { 00:12:05.592 "name": "BaseBdev2", 00:12:05.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.592 "is_configured": false, 00:12:05.592 "data_offset": 0, 00:12:05.592 "data_size": 0 00:12:05.592 }, 00:12:05.592 { 00:12:05.592 "name": "BaseBdev3", 00:12:05.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.592 "is_configured": false, 00:12:05.592 "data_offset": 0, 00:12:05.592 "data_size": 0 00:12:05.592 } 00:12:05.592 ] 00:12:05.593 }' 00:12:05.593 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.593 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.159 22:29:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.159 [2024-09-27 22:29:01.833923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.159 BaseBdev2 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.159 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.159 [ 00:12:06.159 { 00:12:06.159 "name": "BaseBdev2", 00:12:06.159 "aliases": [ 00:12:06.159 "e3dbf556-ac84-474b-9cf5-c7bb81ee78c8" 00:12:06.160 ], 00:12:06.160 "product_name": "Malloc disk", 
00:12:06.160 "block_size": 512, 00:12:06.160 "num_blocks": 65536, 00:12:06.160 "uuid": "e3dbf556-ac84-474b-9cf5-c7bb81ee78c8", 00:12:06.160 "assigned_rate_limits": { 00:12:06.160 "rw_ios_per_sec": 0, 00:12:06.160 "rw_mbytes_per_sec": 0, 00:12:06.160 "r_mbytes_per_sec": 0, 00:12:06.160 "w_mbytes_per_sec": 0 00:12:06.160 }, 00:12:06.160 "claimed": true, 00:12:06.160 "claim_type": "exclusive_write", 00:12:06.160 "zoned": false, 00:12:06.160 "supported_io_types": { 00:12:06.160 "read": true, 00:12:06.160 "write": true, 00:12:06.160 "unmap": true, 00:12:06.160 "flush": true, 00:12:06.160 "reset": true, 00:12:06.160 "nvme_admin": false, 00:12:06.160 "nvme_io": false, 00:12:06.160 "nvme_io_md": false, 00:12:06.160 "write_zeroes": true, 00:12:06.160 "zcopy": true, 00:12:06.160 "get_zone_info": false, 00:12:06.160 "zone_management": false, 00:12:06.160 "zone_append": false, 00:12:06.160 "compare": false, 00:12:06.160 "compare_and_write": false, 00:12:06.160 "abort": true, 00:12:06.160 "seek_hole": false, 00:12:06.160 "seek_data": false, 00:12:06.160 "copy": true, 00:12:06.160 "nvme_iov_md": false 00:12:06.160 }, 00:12:06.160 "memory_domains": [ 00:12:06.160 { 00:12:06.160 "dma_device_id": "system", 00:12:06.160 "dma_device_type": 1 00:12:06.160 }, 00:12:06.160 { 00:12:06.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.160 "dma_device_type": 2 00:12:06.160 } 00:12:06.160 ], 00:12:06.160 "driver_specific": {} 00:12:06.160 } 00:12:06.160 ] 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.160 "name": "Existed_Raid", 00:12:06.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.160 "strip_size_kb": 0, 00:12:06.160 "state": "configuring", 00:12:06.160 "raid_level": "raid1", 00:12:06.160 "superblock": false, 00:12:06.160 "num_base_bdevs": 3, 
00:12:06.160 "num_base_bdevs_discovered": 2, 00:12:06.160 "num_base_bdevs_operational": 3, 00:12:06.160 "base_bdevs_list": [ 00:12:06.160 { 00:12:06.160 "name": "BaseBdev1", 00:12:06.160 "uuid": "03c9de11-be4e-4495-b6a5-be3240678bd0", 00:12:06.160 "is_configured": true, 00:12:06.160 "data_offset": 0, 00:12:06.160 "data_size": 65536 00:12:06.160 }, 00:12:06.160 { 00:12:06.160 "name": "BaseBdev2", 00:12:06.160 "uuid": "e3dbf556-ac84-474b-9cf5-c7bb81ee78c8", 00:12:06.160 "is_configured": true, 00:12:06.160 "data_offset": 0, 00:12:06.160 "data_size": 65536 00:12:06.160 }, 00:12:06.160 { 00:12:06.160 "name": "BaseBdev3", 00:12:06.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.160 "is_configured": false, 00:12:06.160 "data_offset": 0, 00:12:06.160 "data_size": 0 00:12:06.160 } 00:12:06.160 ] 00:12:06.160 }' 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.160 22:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.726 [2024-09-27 22:29:02.342618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.726 [2024-09-27 22:29:02.342671] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:06.726 [2024-09-27 22:29:02.342692] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:06.726 [2024-09-27 22:29:02.343009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:06.726 [2024-09-27 22:29:02.343189] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:12:06.726 [2024-09-27 22:29:02.343200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:06.726 [2024-09-27 22:29:02.343473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.726 BaseBdev3 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.726 [ 00:12:06.726 { 00:12:06.726 "name": "BaseBdev3", 00:12:06.726 "aliases": [ 00:12:06.726 
"67bd8566-bcd7-42fa-96ee-af29ba6cf63d" 00:12:06.726 ], 00:12:06.726 "product_name": "Malloc disk", 00:12:06.726 "block_size": 512, 00:12:06.726 "num_blocks": 65536, 00:12:06.726 "uuid": "67bd8566-bcd7-42fa-96ee-af29ba6cf63d", 00:12:06.726 "assigned_rate_limits": { 00:12:06.726 "rw_ios_per_sec": 0, 00:12:06.726 "rw_mbytes_per_sec": 0, 00:12:06.726 "r_mbytes_per_sec": 0, 00:12:06.726 "w_mbytes_per_sec": 0 00:12:06.726 }, 00:12:06.726 "claimed": true, 00:12:06.726 "claim_type": "exclusive_write", 00:12:06.726 "zoned": false, 00:12:06.726 "supported_io_types": { 00:12:06.726 "read": true, 00:12:06.726 "write": true, 00:12:06.726 "unmap": true, 00:12:06.726 "flush": true, 00:12:06.726 "reset": true, 00:12:06.726 "nvme_admin": false, 00:12:06.726 "nvme_io": false, 00:12:06.726 "nvme_io_md": false, 00:12:06.726 "write_zeroes": true, 00:12:06.726 "zcopy": true, 00:12:06.726 "get_zone_info": false, 00:12:06.726 "zone_management": false, 00:12:06.726 "zone_append": false, 00:12:06.726 "compare": false, 00:12:06.726 "compare_and_write": false, 00:12:06.726 "abort": true, 00:12:06.726 "seek_hole": false, 00:12:06.726 "seek_data": false, 00:12:06.726 "copy": true, 00:12:06.726 "nvme_iov_md": false 00:12:06.726 }, 00:12:06.726 "memory_domains": [ 00:12:06.726 { 00:12:06.726 "dma_device_id": "system", 00:12:06.726 "dma_device_type": 1 00:12:06.726 }, 00:12:06.726 { 00:12:06.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.726 "dma_device_type": 2 00:12:06.726 } 00:12:06.726 ], 00:12:06.726 "driver_specific": {} 00:12:06.726 } 00:12:06.726 ] 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.726 
22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.726 "name": "Existed_Raid", 00:12:06.726 "uuid": "d6c3164a-9b92-49f2-9cd9-7e3d8e0a1d77", 00:12:06.726 "strip_size_kb": 0, 00:12:06.726 "state": "online", 00:12:06.726 "raid_level": 
"raid1", 00:12:06.726 "superblock": false, 00:12:06.726 "num_base_bdevs": 3, 00:12:06.726 "num_base_bdevs_discovered": 3, 00:12:06.726 "num_base_bdevs_operational": 3, 00:12:06.726 "base_bdevs_list": [ 00:12:06.726 { 00:12:06.726 "name": "BaseBdev1", 00:12:06.726 "uuid": "03c9de11-be4e-4495-b6a5-be3240678bd0", 00:12:06.726 "is_configured": true, 00:12:06.726 "data_offset": 0, 00:12:06.726 "data_size": 65536 00:12:06.726 }, 00:12:06.726 { 00:12:06.726 "name": "BaseBdev2", 00:12:06.726 "uuid": "e3dbf556-ac84-474b-9cf5-c7bb81ee78c8", 00:12:06.726 "is_configured": true, 00:12:06.726 "data_offset": 0, 00:12:06.726 "data_size": 65536 00:12:06.726 }, 00:12:06.726 { 00:12:06.726 "name": "BaseBdev3", 00:12:06.726 "uuid": "67bd8566-bcd7-42fa-96ee-af29ba6cf63d", 00:12:06.726 "is_configured": true, 00:12:06.726 "data_offset": 0, 00:12:06.726 "data_size": 65536 00:12:06.726 } 00:12:06.726 ] 00:12:06.726 }' 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.726 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.985 [2024-09-27 22:29:02.818386] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.985 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.985 "name": "Existed_Raid", 00:12:06.985 "aliases": [ 00:12:06.985 "d6c3164a-9b92-49f2-9cd9-7e3d8e0a1d77" 00:12:06.985 ], 00:12:06.985 "product_name": "Raid Volume", 00:12:06.985 "block_size": 512, 00:12:06.985 "num_blocks": 65536, 00:12:06.985 "uuid": "d6c3164a-9b92-49f2-9cd9-7e3d8e0a1d77", 00:12:06.985 "assigned_rate_limits": { 00:12:06.985 "rw_ios_per_sec": 0, 00:12:06.985 "rw_mbytes_per_sec": 0, 00:12:06.985 "r_mbytes_per_sec": 0, 00:12:06.985 "w_mbytes_per_sec": 0 00:12:06.985 }, 00:12:06.985 "claimed": false, 00:12:06.985 "zoned": false, 00:12:06.985 "supported_io_types": { 00:12:06.985 "read": true, 00:12:06.985 "write": true, 00:12:06.985 "unmap": false, 00:12:06.985 "flush": false, 00:12:06.985 "reset": true, 00:12:06.985 "nvme_admin": false, 00:12:06.985 "nvme_io": false, 00:12:06.985 "nvme_io_md": false, 00:12:06.985 "write_zeroes": true, 00:12:06.985 "zcopy": false, 00:12:06.985 "get_zone_info": false, 00:12:06.985 "zone_management": false, 00:12:06.985 "zone_append": false, 00:12:06.985 "compare": false, 00:12:06.985 "compare_and_write": false, 00:12:06.985 "abort": false, 00:12:06.985 "seek_hole": false, 00:12:06.985 "seek_data": false, 00:12:06.985 "copy": false, 00:12:06.985 "nvme_iov_md": false 00:12:06.985 }, 00:12:06.985 "memory_domains": [ 00:12:06.985 { 00:12:06.985 "dma_device_id": "system", 00:12:06.985 "dma_device_type": 1 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.985 "dma_device_type": 2 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 "dma_device_id": "system", 00:12:06.985 "dma_device_type": 1 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.985 "dma_device_type": 2 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 "dma_device_id": "system", 00:12:06.985 "dma_device_type": 1 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.985 "dma_device_type": 2 00:12:06.985 } 00:12:06.985 ], 00:12:06.985 "driver_specific": { 00:12:06.985 "raid": { 00:12:06.985 "uuid": "d6c3164a-9b92-49f2-9cd9-7e3d8e0a1d77", 00:12:06.985 "strip_size_kb": 0, 00:12:06.985 "state": "online", 00:12:06.985 "raid_level": "raid1", 00:12:06.985 "superblock": false, 00:12:06.985 "num_base_bdevs": 3, 00:12:06.985 "num_base_bdevs_discovered": 3, 00:12:06.985 "num_base_bdevs_operational": 3, 00:12:06.985 "base_bdevs_list": [ 00:12:06.985 { 00:12:06.985 "name": "BaseBdev1", 00:12:06.985 "uuid": "03c9de11-be4e-4495-b6a5-be3240678bd0", 00:12:06.985 "is_configured": true, 00:12:06.985 "data_offset": 0, 00:12:06.985 "data_size": 65536 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 "name": "BaseBdev2", 00:12:06.985 "uuid": "e3dbf556-ac84-474b-9cf5-c7bb81ee78c8", 00:12:06.985 "is_configured": true, 00:12:06.985 "data_offset": 0, 00:12:06.985 "data_size": 65536 00:12:06.985 }, 00:12:06.985 { 00:12:06.985 "name": "BaseBdev3", 00:12:06.985 "uuid": "67bd8566-bcd7-42fa-96ee-af29ba6cf63d", 00:12:06.985 "is_configured": true, 00:12:06.985 "data_offset": 0, 00:12:06.986 "data_size": 65536 00:12:06.986 } 00:12:06.986 ] 00:12:06.986 } 00:12:06.986 } 00:12:06.986 }' 00:12:06.986 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:12:07.245 BaseBdev2 00:12:07.245 BaseBdev3' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.245 22:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.245 22:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.245 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.245 [2024-09-27 22:29:03.097683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case 
$1 in 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.504 "name": "Existed_Raid", 00:12:07.504 "uuid": "d6c3164a-9b92-49f2-9cd9-7e3d8e0a1d77", 00:12:07.504 "strip_size_kb": 0, 00:12:07.504 "state": "online", 00:12:07.504 "raid_level": "raid1", 00:12:07.504 "superblock": false, 00:12:07.504 "num_base_bdevs": 3, 00:12:07.504 "num_base_bdevs_discovered": 2, 00:12:07.504 "num_base_bdevs_operational": 2, 00:12:07.504 "base_bdevs_list": [ 00:12:07.504 { 00:12:07.504 "name": null, 00:12:07.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.504 "is_configured": false, 00:12:07.504 "data_offset": 0, 00:12:07.504 "data_size": 65536 00:12:07.504 }, 00:12:07.504 { 00:12:07.504 "name": "BaseBdev2", 00:12:07.504 "uuid": "e3dbf556-ac84-474b-9cf5-c7bb81ee78c8", 00:12:07.504 "is_configured": true, 00:12:07.504 "data_offset": 0, 00:12:07.504 "data_size": 65536 00:12:07.504 }, 00:12:07.504 { 00:12:07.504 "name": "BaseBdev3", 00:12:07.504 "uuid": "67bd8566-bcd7-42fa-96ee-af29ba6cf63d", 00:12:07.504 "is_configured": true, 00:12:07.504 "data_offset": 0, 00:12:07.504 "data_size": 65536 00:12:07.504 } 00:12:07.504 ] 00:12:07.504 }' 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.504 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:12:07.763 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.022 [2024-09-27 22:29:03.661279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete 
BaseBdev3 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.022 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.022 [2024-09-27 22:29:03.813676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.022 [2024-09-27 22:29:03.813796] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.282 [2024-09-27 22:29:03.907746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.282 [2024-09-27 22:29:03.907812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.282 [2024-09-27 22:29:03.907827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:08.282 22:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.282 22:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.282 BaseBdev2 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs 
-b BaseBdev2 -t 2000 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.282 [ 00:12:08.282 { 00:12:08.282 "name": "BaseBdev2", 00:12:08.282 "aliases": [ 00:12:08.282 "eaab2b24-433b-4f3b-bb55-342af21177bd" 00:12:08.282 ], 00:12:08.282 "product_name": "Malloc disk", 00:12:08.282 "block_size": 512, 00:12:08.282 "num_blocks": 65536, 00:12:08.282 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:08.282 "assigned_rate_limits": { 00:12:08.282 "rw_ios_per_sec": 0, 00:12:08.282 "rw_mbytes_per_sec": 0, 00:12:08.282 "r_mbytes_per_sec": 0, 00:12:08.282 "w_mbytes_per_sec": 0 00:12:08.282 }, 00:12:08.282 "claimed": false, 00:12:08.282 "zoned": false, 00:12:08.282 "supported_io_types": { 00:12:08.282 "read": true, 00:12:08.282 "write": true, 00:12:08.282 "unmap": true, 00:12:08.282 "flush": true, 00:12:08.282 "reset": true, 00:12:08.282 "nvme_admin": false, 00:12:08.282 "nvme_io": false, 00:12:08.282 "nvme_io_md": false, 00:12:08.282 "write_zeroes": true, 00:12:08.282 "zcopy": true, 00:12:08.282 "get_zone_info": false, 00:12:08.282 "zone_management": false, 00:12:08.282 "zone_append": false, 00:12:08.282 "compare": false, 00:12:08.282 "compare_and_write": false, 00:12:08.282 "abort": true, 00:12:08.282 "seek_hole": false, 00:12:08.282 "seek_data": false, 00:12:08.282 "copy": true, 00:12:08.282 "nvme_iov_md": false 00:12:08.282 }, 00:12:08.282 "memory_domains": [ 00:12:08.282 { 00:12:08.282 "dma_device_id": "system", 00:12:08.282 "dma_device_type": 1 00:12:08.282 }, 00:12:08.282 { 00:12:08.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.282 "dma_device_type": 2 00:12:08.282 } 00:12:08.282 ], 00:12:08.282 "driver_specific": {} 00:12:08.282 } 00:12:08.282 ] 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.282 22:29:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.282 BaseBdev3 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.282 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.283 [ 00:12:08.283 { 00:12:08.283 "name": "BaseBdev3", 00:12:08.283 "aliases": [ 00:12:08.283 "ba6c58d2-aaf7-4420-b672-124a505baab3" 00:12:08.283 ], 00:12:08.283 "product_name": "Malloc disk", 00:12:08.283 "block_size": 512, 00:12:08.283 "num_blocks": 65536, 00:12:08.283 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:08.283 "assigned_rate_limits": { 00:12:08.283 "rw_ios_per_sec": 0, 00:12:08.283 "rw_mbytes_per_sec": 0, 00:12:08.283 "r_mbytes_per_sec": 0, 00:12:08.283 "w_mbytes_per_sec": 0 00:12:08.283 }, 00:12:08.283 "claimed": false, 00:12:08.283 "zoned": false, 00:12:08.283 "supported_io_types": { 00:12:08.283 "read": true, 00:12:08.283 "write": true, 00:12:08.283 "unmap": true, 00:12:08.283 "flush": true, 00:12:08.283 "reset": true, 00:12:08.283 "nvme_admin": false, 00:12:08.283 "nvme_io": false, 00:12:08.283 "nvme_io_md": false, 00:12:08.283 "write_zeroes": true, 00:12:08.283 "zcopy": true, 00:12:08.283 "get_zone_info": false, 00:12:08.283 "zone_management": false, 00:12:08.283 "zone_append": false, 00:12:08.283 "compare": false, 00:12:08.283 "compare_and_write": false, 00:12:08.283 "abort": true, 00:12:08.283 "seek_hole": false, 00:12:08.283 "seek_data": false, 00:12:08.283 "copy": true, 00:12:08.283 "nvme_iov_md": false 00:12:08.283 }, 00:12:08.283 "memory_domains": [ 00:12:08.283 { 00:12:08.283 "dma_device_id": "system", 00:12:08.283 "dma_device_type": 1 00:12:08.283 }, 00:12:08.283 { 00:12:08.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.283 "dma_device_type": 2 00:12:08.283 } 00:12:08.283 ], 00:12:08.283 "driver_specific": {} 00:12:08.283 } 00:12:08.283 ] 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.283 22:29:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.283 [2024-09-27 22:29:04.135187] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.283 [2024-09-27 22:29:04.135255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.283 [2024-09-27 22:29:04.135281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.283 [2024-09-27 22:29:04.137488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.283 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.542 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.543 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.543 "name": "Existed_Raid", 00:12:08.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.543 "strip_size_kb": 0, 00:12:08.543 "state": "configuring", 00:12:08.543 "raid_level": "raid1", 00:12:08.543 "superblock": false, 00:12:08.543 "num_base_bdevs": 3, 00:12:08.543 "num_base_bdevs_discovered": 2, 00:12:08.543 "num_base_bdevs_operational": 3, 00:12:08.543 "base_bdevs_list": [ 00:12:08.543 { 00:12:08.543 "name": "BaseBdev1", 00:12:08.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.543 "is_configured": false, 00:12:08.543 "data_offset": 0, 00:12:08.543 "data_size": 0 00:12:08.543 }, 00:12:08.543 { 00:12:08.543 "name": "BaseBdev2", 00:12:08.543 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:08.543 "is_configured": true, 00:12:08.543 "data_offset": 0, 00:12:08.543 "data_size": 65536 00:12:08.543 }, 00:12:08.543 { 
00:12:08.543 "name": "BaseBdev3", 00:12:08.543 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:08.543 "is_configured": true, 00:12:08.543 "data_offset": 0, 00:12:08.543 "data_size": 65536 00:12:08.543 } 00:12:08.543 ] 00:12:08.543 }' 00:12:08.543 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.543 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.802 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:08.802 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.802 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.802 [2024-09-27 22:29:04.546561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.802 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.802 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.802 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.803 "name": "Existed_Raid", 00:12:08.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.803 "strip_size_kb": 0, 00:12:08.803 "state": "configuring", 00:12:08.803 "raid_level": "raid1", 00:12:08.803 "superblock": false, 00:12:08.803 "num_base_bdevs": 3, 00:12:08.803 "num_base_bdevs_discovered": 1, 00:12:08.803 "num_base_bdevs_operational": 3, 00:12:08.803 "base_bdevs_list": [ 00:12:08.803 { 00:12:08.803 "name": "BaseBdev1", 00:12:08.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.803 "is_configured": false, 00:12:08.803 "data_offset": 0, 00:12:08.803 "data_size": 0 00:12:08.803 }, 00:12:08.803 { 00:12:08.803 "name": null, 00:12:08.803 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:08.803 "is_configured": false, 00:12:08.803 "data_offset": 0, 00:12:08.803 "data_size": 65536 00:12:08.803 }, 00:12:08.803 { 00:12:08.803 "name": "BaseBdev3", 00:12:08.803 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:08.803 "is_configured": true, 00:12:08.803 "data_offset": 0, 00:12:08.803 "data_size": 65536 00:12:08.803 } 00:12:08.803 ] 00:12:08.803 }' 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.803 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.371 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.371 22:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.371 22:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 [2024-09-27 22:29:05.058777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.371 BaseBdev1 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.371 
22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 [ 00:12:09.371 { 00:12:09.371 "name": "BaseBdev1", 00:12:09.371 "aliases": [ 00:12:09.371 "81913b52-d0c7-4494-834f-640d233b96d6" 00:12:09.371 ], 00:12:09.371 "product_name": "Malloc disk", 00:12:09.371 "block_size": 512, 00:12:09.371 "num_blocks": 65536, 00:12:09.371 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:09.371 "assigned_rate_limits": { 00:12:09.371 "rw_ios_per_sec": 0, 00:12:09.371 "rw_mbytes_per_sec": 0, 00:12:09.371 "r_mbytes_per_sec": 0, 00:12:09.371 "w_mbytes_per_sec": 0 00:12:09.371 }, 00:12:09.371 "claimed": true, 00:12:09.371 "claim_type": "exclusive_write", 00:12:09.371 "zoned": false, 00:12:09.371 "supported_io_types": { 00:12:09.371 "read": true, 00:12:09.371 "write": true, 00:12:09.371 "unmap": true, 00:12:09.371 "flush": true, 00:12:09.371 "reset": true, 00:12:09.371 "nvme_admin": false, 00:12:09.371 "nvme_io": false, 00:12:09.371 "nvme_io_md": false, 00:12:09.371 "write_zeroes": true, 00:12:09.371 "zcopy": true, 00:12:09.371 "get_zone_info": false, 00:12:09.371 "zone_management": false, 00:12:09.371 "zone_append": false, 00:12:09.371 "compare": 
false, 00:12:09.371 "compare_and_write": false, 00:12:09.371 "abort": true, 00:12:09.371 "seek_hole": false, 00:12:09.371 "seek_data": false, 00:12:09.371 "copy": true, 00:12:09.371 "nvme_iov_md": false 00:12:09.371 }, 00:12:09.371 "memory_domains": [ 00:12:09.371 { 00:12:09.371 "dma_device_id": "system", 00:12:09.371 "dma_device_type": 1 00:12:09.371 }, 00:12:09.371 { 00:12:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.371 "dma_device_type": 2 00:12:09.371 } 00:12:09.371 ], 00:12:09.371 "driver_specific": {} 00:12:09.371 } 00:12:09.371 ] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.371 "name": "Existed_Raid", 00:12:09.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.371 "strip_size_kb": 0, 00:12:09.371 "state": "configuring", 00:12:09.371 "raid_level": "raid1", 00:12:09.371 "superblock": false, 00:12:09.371 "num_base_bdevs": 3, 00:12:09.371 "num_base_bdevs_discovered": 2, 00:12:09.371 "num_base_bdevs_operational": 3, 00:12:09.371 "base_bdevs_list": [ 00:12:09.371 { 00:12:09.371 "name": "BaseBdev1", 00:12:09.371 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:09.371 "is_configured": true, 00:12:09.371 "data_offset": 0, 00:12:09.371 "data_size": 65536 00:12:09.371 }, 00:12:09.371 { 00:12:09.371 "name": null, 00:12:09.371 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:09.371 "is_configured": false, 00:12:09.371 "data_offset": 0, 00:12:09.371 "data_size": 65536 00:12:09.371 }, 00:12:09.371 { 00:12:09.371 "name": "BaseBdev3", 00:12:09.371 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:09.371 "is_configured": true, 00:12:09.371 "data_offset": 0, 00:12:09.371 "data_size": 65536 00:12:09.371 } 00:12:09.371 ] 00:12:09.371 }' 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.371 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.630 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:12:09.630 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.630 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.630 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 [2024-09-27 22:29:05.550207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.889 
22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.889 "name": "Existed_Raid", 00:12:09.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.889 "strip_size_kb": 0, 00:12:09.889 "state": "configuring", 00:12:09.889 "raid_level": "raid1", 00:12:09.889 "superblock": false, 00:12:09.889 "num_base_bdevs": 3, 00:12:09.889 "num_base_bdevs_discovered": 1, 00:12:09.889 "num_base_bdevs_operational": 3, 00:12:09.889 "base_bdevs_list": [ 00:12:09.889 { 00:12:09.889 "name": "BaseBdev1", 00:12:09.889 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:09.889 "is_configured": true, 00:12:09.889 "data_offset": 0, 00:12:09.889 "data_size": 65536 00:12:09.889 }, 00:12:09.889 { 00:12:09.889 "name": null, 00:12:09.889 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:09.889 "is_configured": false, 00:12:09.889 "data_offset": 0, 00:12:09.889 "data_size": 65536 00:12:09.889 }, 00:12:09.889 { 00:12:09.889 "name": null, 00:12:09.889 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:09.889 "is_configured": false, 00:12:09.889 "data_offset": 0, 
00:12:09.889 "data_size": 65536 00:12:09.889 } 00:12:09.889 ] 00:12:09.889 }' 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.889 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.159 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.159 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.159 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.159 22:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.159 22:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.159 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:10.159 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:10.159 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.159 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.159 [2024-09-27 22:29:06.033586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.417 "name": "Existed_Raid", 00:12:10.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.417 "strip_size_kb": 0, 00:12:10.417 "state": "configuring", 00:12:10.417 "raid_level": "raid1", 00:12:10.417 "superblock": false, 00:12:10.417 "num_base_bdevs": 3, 00:12:10.417 "num_base_bdevs_discovered": 2, 00:12:10.417 "num_base_bdevs_operational": 3, 00:12:10.417 "base_bdevs_list": [ 00:12:10.417 { 00:12:10.417 "name": "BaseBdev1", 00:12:10.417 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:10.417 "is_configured": true, 00:12:10.417 "data_offset": 0, 00:12:10.417 "data_size": 65536 
00:12:10.417 }, 00:12:10.417 { 00:12:10.417 "name": null, 00:12:10.417 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:10.417 "is_configured": false, 00:12:10.417 "data_offset": 0, 00:12:10.417 "data_size": 65536 00:12:10.417 }, 00:12:10.417 { 00:12:10.417 "name": "BaseBdev3", 00:12:10.417 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:10.417 "is_configured": true, 00:12:10.417 "data_offset": 0, 00:12:10.417 "data_size": 65536 00:12:10.417 } 00:12:10.417 ] 00:12:10.417 }' 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.417 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.676 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.676 [2024-09-27 22:29:06.548971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.935 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.936 "name": "Existed_Raid", 00:12:10.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.936 "strip_size_kb": 0, 00:12:10.936 "state": "configuring", 00:12:10.936 "raid_level": "raid1", 00:12:10.936 
"superblock": false, 00:12:10.936 "num_base_bdevs": 3, 00:12:10.936 "num_base_bdevs_discovered": 1, 00:12:10.936 "num_base_bdevs_operational": 3, 00:12:10.936 "base_bdevs_list": [ 00:12:10.936 { 00:12:10.936 "name": null, 00:12:10.936 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:10.936 "is_configured": false, 00:12:10.936 "data_offset": 0, 00:12:10.936 "data_size": 65536 00:12:10.936 }, 00:12:10.936 { 00:12:10.936 "name": null, 00:12:10.936 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:10.936 "is_configured": false, 00:12:10.936 "data_offset": 0, 00:12:10.936 "data_size": 65536 00:12:10.936 }, 00:12:10.936 { 00:12:10.936 "name": "BaseBdev3", 00:12:10.936 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:10.936 "is_configured": true, 00:12:10.936 "data_offset": 0, 00:12:10.936 "data_size": 65536 00:12:10.936 } 00:12:10.936 ] 00:12:10.936 }' 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.936 22:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.503 [2024-09-27 22:29:07.126180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.503 22:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.503 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.503 "name": "Existed_Raid", 00:12:11.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.504 "strip_size_kb": 0, 00:12:11.504 "state": "configuring", 00:12:11.504 "raid_level": "raid1", 00:12:11.504 "superblock": false, 00:12:11.504 "num_base_bdevs": 3, 00:12:11.504 "num_base_bdevs_discovered": 2, 00:12:11.504 "num_base_bdevs_operational": 3, 00:12:11.504 "base_bdevs_list": [ 00:12:11.504 { 00:12:11.504 "name": null, 00:12:11.504 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:11.504 "is_configured": false, 00:12:11.504 "data_offset": 0, 00:12:11.504 "data_size": 65536 00:12:11.504 }, 00:12:11.504 { 00:12:11.504 "name": "BaseBdev2", 00:12:11.504 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:11.504 "is_configured": true, 00:12:11.504 "data_offset": 0, 00:12:11.504 "data_size": 65536 00:12:11.504 }, 00:12:11.504 { 00:12:11.504 "name": "BaseBdev3", 00:12:11.504 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:11.504 "is_configured": true, 00:12:11.504 "data_offset": 0, 00:12:11.504 "data_size": 65536 00:12:11.504 } 00:12:11.504 ] 00:12:11.504 }' 00:12:11.504 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.504 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.762 22:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81913b52-d0c7-4494-834f-640d233b96d6 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.762 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.027 [2024-09-27 22:29:07.679726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:12.027 [2024-09-27 22:29:07.679799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:12.027 [2024-09-27 22:29:07.679810] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:12.027 [2024-09-27 22:29:07.680162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:12.027 [2024-09-27 22:29:07.680345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:12.027 [2024-09-27 22:29:07.680360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:12.027 [2024-09-27 22:29:07.680645] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.027 NewBaseBdev 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.027 [ 00:12:12.027 { 00:12:12.027 "name": "NewBaseBdev", 00:12:12.027 "aliases": [ 00:12:12.027 "81913b52-d0c7-4494-834f-640d233b96d6" 00:12:12.027 ], 00:12:12.027 "product_name": "Malloc disk", 00:12:12.027 "block_size": 512, 00:12:12.027 "num_blocks": 65536, 00:12:12.027 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 
00:12:12.027 "assigned_rate_limits": { 00:12:12.027 "rw_ios_per_sec": 0, 00:12:12.027 "rw_mbytes_per_sec": 0, 00:12:12.027 "r_mbytes_per_sec": 0, 00:12:12.027 "w_mbytes_per_sec": 0 00:12:12.027 }, 00:12:12.027 "claimed": true, 00:12:12.027 "claim_type": "exclusive_write", 00:12:12.027 "zoned": false, 00:12:12.027 "supported_io_types": { 00:12:12.027 "read": true, 00:12:12.027 "write": true, 00:12:12.027 "unmap": true, 00:12:12.027 "flush": true, 00:12:12.027 "reset": true, 00:12:12.027 "nvme_admin": false, 00:12:12.027 "nvme_io": false, 00:12:12.027 "nvme_io_md": false, 00:12:12.027 "write_zeroes": true, 00:12:12.027 "zcopy": true, 00:12:12.027 "get_zone_info": false, 00:12:12.027 "zone_management": false, 00:12:12.027 "zone_append": false, 00:12:12.027 "compare": false, 00:12:12.027 "compare_and_write": false, 00:12:12.027 "abort": true, 00:12:12.027 "seek_hole": false, 00:12:12.027 "seek_data": false, 00:12:12.027 "copy": true, 00:12:12.027 "nvme_iov_md": false 00:12:12.027 }, 00:12:12.027 "memory_domains": [ 00:12:12.027 { 00:12:12.027 "dma_device_id": "system", 00:12:12.027 "dma_device_type": 1 00:12:12.027 }, 00:12:12.027 { 00:12:12.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.027 "dma_device_type": 2 00:12:12.027 } 00:12:12.027 ], 00:12:12.027 "driver_specific": {} 00:12:12.027 } 00:12:12.027 ] 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.027 "name": "Existed_Raid", 00:12:12.027 "uuid": "ad8d530a-960a-4246-8895-d5d818d08c2a", 00:12:12.027 "strip_size_kb": 0, 00:12:12.027 "state": "online", 00:12:12.027 "raid_level": "raid1", 00:12:12.027 "superblock": false, 00:12:12.027 "num_base_bdevs": 3, 00:12:12.027 "num_base_bdevs_discovered": 3, 00:12:12.027 "num_base_bdevs_operational": 3, 00:12:12.027 "base_bdevs_list": [ 00:12:12.027 { 00:12:12.027 "name": "NewBaseBdev", 00:12:12.027 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:12.027 "is_configured": true, 00:12:12.027 "data_offset": 0, 00:12:12.027 "data_size": 65536 
00:12:12.027 }, 00:12:12.027 { 00:12:12.027 "name": "BaseBdev2", 00:12:12.027 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:12.027 "is_configured": true, 00:12:12.027 "data_offset": 0, 00:12:12.027 "data_size": 65536 00:12:12.027 }, 00:12:12.027 { 00:12:12.027 "name": "BaseBdev3", 00:12:12.027 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:12.027 "is_configured": true, 00:12:12.027 "data_offset": 0, 00:12:12.027 "data_size": 65536 00:12:12.027 } 00:12:12.027 ] 00:12:12.027 }' 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.027 22:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.596 [2024-09-27 22:29:08.191672] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.596 "name": "Existed_Raid", 00:12:12.596 "aliases": [ 00:12:12.596 "ad8d530a-960a-4246-8895-d5d818d08c2a" 00:12:12.596 ], 00:12:12.596 "product_name": "Raid Volume", 00:12:12.596 "block_size": 512, 00:12:12.596 "num_blocks": 65536, 00:12:12.596 "uuid": "ad8d530a-960a-4246-8895-d5d818d08c2a", 00:12:12.596 "assigned_rate_limits": { 00:12:12.596 "rw_ios_per_sec": 0, 00:12:12.596 "rw_mbytes_per_sec": 0, 00:12:12.596 "r_mbytes_per_sec": 0, 00:12:12.596 "w_mbytes_per_sec": 0 00:12:12.596 }, 00:12:12.596 "claimed": false, 00:12:12.596 "zoned": false, 00:12:12.596 "supported_io_types": { 00:12:12.596 "read": true, 00:12:12.596 "write": true, 00:12:12.596 "unmap": false, 00:12:12.596 "flush": false, 00:12:12.596 "reset": true, 00:12:12.596 "nvme_admin": false, 00:12:12.596 "nvme_io": false, 00:12:12.596 "nvme_io_md": false, 00:12:12.596 "write_zeroes": true, 00:12:12.596 "zcopy": false, 00:12:12.596 "get_zone_info": false, 00:12:12.596 "zone_management": false, 00:12:12.596 "zone_append": false, 00:12:12.596 "compare": false, 00:12:12.596 "compare_and_write": false, 00:12:12.596 "abort": false, 00:12:12.596 "seek_hole": false, 00:12:12.596 "seek_data": false, 00:12:12.596 "copy": false, 00:12:12.596 "nvme_iov_md": false 00:12:12.596 }, 00:12:12.596 "memory_domains": [ 00:12:12.596 { 00:12:12.596 "dma_device_id": "system", 00:12:12.596 "dma_device_type": 1 00:12:12.596 }, 00:12:12.596 { 00:12:12.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.596 "dma_device_type": 2 00:12:12.596 }, 00:12:12.596 { 00:12:12.596 "dma_device_id": "system", 00:12:12.596 "dma_device_type": 1 00:12:12.596 }, 00:12:12.596 { 00:12:12.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.596 "dma_device_type": 2 00:12:12.596 }, 00:12:12.596 { 00:12:12.596 "dma_device_id": "system", 00:12:12.596 "dma_device_type": 1 00:12:12.596 }, 
00:12:12.596 { 00:12:12.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.596 "dma_device_type": 2 00:12:12.596 } 00:12:12.596 ], 00:12:12.596 "driver_specific": { 00:12:12.596 "raid": { 00:12:12.596 "uuid": "ad8d530a-960a-4246-8895-d5d818d08c2a", 00:12:12.596 "strip_size_kb": 0, 00:12:12.596 "state": "online", 00:12:12.596 "raid_level": "raid1", 00:12:12.596 "superblock": false, 00:12:12.596 "num_base_bdevs": 3, 00:12:12.596 "num_base_bdevs_discovered": 3, 00:12:12.596 "num_base_bdevs_operational": 3, 00:12:12.596 "base_bdevs_list": [ 00:12:12.596 { 00:12:12.596 "name": "NewBaseBdev", 00:12:12.596 "uuid": "81913b52-d0c7-4494-834f-640d233b96d6", 00:12:12.596 "is_configured": true, 00:12:12.596 "data_offset": 0, 00:12:12.596 "data_size": 65536 00:12:12.596 }, 00:12:12.596 { 00:12:12.596 "name": "BaseBdev2", 00:12:12.596 "uuid": "eaab2b24-433b-4f3b-bb55-342af21177bd", 00:12:12.596 "is_configured": true, 00:12:12.596 "data_offset": 0, 00:12:12.596 "data_size": 65536 00:12:12.596 }, 00:12:12.596 { 00:12:12.596 "name": "BaseBdev3", 00:12:12.596 "uuid": "ba6c58d2-aaf7-4420-b672-124a505baab3", 00:12:12.596 "is_configured": true, 00:12:12.596 "data_offset": 0, 00:12:12.596 "data_size": 65536 00:12:12.596 } 00:12:12.596 ] 00:12:12.596 } 00:12:12.596 } 00:12:12.596 }' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:12.596 BaseBdev2 00:12:12.596 BaseBdev3' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.596 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.597 [2024-09-27 22:29:08.459334] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.597 [2024-09-27 22:29:08.459377] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.597 [2024-09-27 22:29:08.459459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.597 [2024-09-27 22:29:08.459791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.597 [2024-09-27 22:29:08.459806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67976 00:12:12.597 22:29:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67976 ']' 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67976 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:12.597 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.855 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67976 00:12:12.855 killing process with pid 67976 00:12:12.855 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.855 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.855 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67976' 00:12:12.855 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67976 00:12:12.856 [2024-09-27 22:29:08.512442] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.856 22:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67976 00:12:13.177 [2024-09-27 22:29:08.846734] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.713 22:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:15.713 00:12:15.713 real 0m11.819s 00:12:15.713 user 0m17.872s 00:12:15.713 sys 0m2.112s 00:12:15.713 22:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.713 ************************************ 00:12:15.713 END TEST raid_state_function_test 00:12:15.713 ************************************ 00:12:15.713 22:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.713 22:29:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:15.713 22:29:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:15.713 22:29:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.713 22:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.713 ************************************ 00:12:15.713 START TEST raid_state_function_test_sb 00:12:15.713 ************************************ 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68611 00:12:15.713 Process raid pid: 68611 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68611' 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68611 00:12:15.713 22:29:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68611 ']' 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.713 22:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.713 [2024-09-27 22:29:11.184974] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:12:15.713 [2024-09-27 22:29:11.185144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.713 [2024-09-27 22:29:11.352091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.972 [2024-09-27 22:29:11.603872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.231 [2024-09-27 22:29:11.866689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.231 [2024-09-27 22:29:11.866744] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.801 [2024-09-27 22:29:12.382319] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.801 [2024-09-27 22:29:12.382387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.801 [2024-09-27 22:29:12.382399] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.801 [2024-09-27 22:29:12.382414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.801 [2024-09-27 22:29:12.382422] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:16.801 [2024-09-27 22:29:12.382437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.801 
22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.801 "name": "Existed_Raid", 00:12:16.801 "uuid": "703c3ba5-fd01-432f-b906-744331e91e78", 00:12:16.801 "strip_size_kb": 0, 00:12:16.801 "state": "configuring", 00:12:16.801 "raid_level": "raid1", 00:12:16.801 "superblock": true, 00:12:16.801 "num_base_bdevs": 3, 00:12:16.801 "num_base_bdevs_discovered": 0, 00:12:16.801 "num_base_bdevs_operational": 3, 00:12:16.801 "base_bdevs_list": [ 00:12:16.801 { 00:12:16.801 "name": "BaseBdev1", 00:12:16.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.801 "is_configured": false, 00:12:16.801 "data_offset": 0, 00:12:16.801 "data_size": 0 00:12:16.801 }, 00:12:16.801 { 00:12:16.801 "name": "BaseBdev2", 00:12:16.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.801 "is_configured": false, 00:12:16.801 "data_offset": 0, 00:12:16.801 "data_size": 0 00:12:16.801 }, 00:12:16.801 { 00:12:16.801 
"name": "BaseBdev3", 00:12:16.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.801 "is_configured": false, 00:12:16.801 "data_offset": 0, 00:12:16.801 "data_size": 0 00:12:16.801 } 00:12:16.801 ] 00:12:16.801 }' 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.801 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 [2024-09-27 22:29:12.805673] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.061 [2024-09-27 22:29:12.805724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 [2024-09-27 22:29:12.817706] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.061 [2024-09-27 22:29:12.817770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.061 [2024-09-27 22:29:12.817781] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.061 [2024-09-27 
22:29:12.817795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.061 [2024-09-27 22:29:12.817803] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.061 [2024-09-27 22:29:12.817816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 [2024-09-27 22:29:12.875895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.061 BaseBdev1 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.061 [ 00:12:17.061 { 00:12:17.061 "name": "BaseBdev1", 00:12:17.061 "aliases": [ 00:12:17.061 "6891c073-67a5-4829-b3f0-a61c30de67f9" 00:12:17.061 ], 00:12:17.061 "product_name": "Malloc disk", 00:12:17.061 "block_size": 512, 00:12:17.061 "num_blocks": 65536, 00:12:17.061 "uuid": "6891c073-67a5-4829-b3f0-a61c30de67f9", 00:12:17.061 "assigned_rate_limits": { 00:12:17.061 "rw_ios_per_sec": 0, 00:12:17.061 "rw_mbytes_per_sec": 0, 00:12:17.061 "r_mbytes_per_sec": 0, 00:12:17.061 "w_mbytes_per_sec": 0 00:12:17.061 }, 00:12:17.061 "claimed": true, 00:12:17.061 "claim_type": "exclusive_write", 00:12:17.061 "zoned": false, 00:12:17.061 "supported_io_types": { 00:12:17.061 "read": true, 00:12:17.061 "write": true, 00:12:17.061 "unmap": true, 00:12:17.061 "flush": true, 00:12:17.061 "reset": true, 00:12:17.061 "nvme_admin": false, 00:12:17.061 "nvme_io": false, 00:12:17.061 "nvme_io_md": false, 00:12:17.061 "write_zeroes": true, 00:12:17.061 "zcopy": true, 00:12:17.061 "get_zone_info": false, 00:12:17.061 "zone_management": false, 00:12:17.061 "zone_append": false, 00:12:17.061 "compare": false, 00:12:17.061 "compare_and_write": false, 00:12:17.061 "abort": true, 00:12:17.061 "seek_hole": false, 00:12:17.061 "seek_data": false, 00:12:17.061 "copy": true, 00:12:17.061 "nvme_iov_md": false 00:12:17.061 }, 00:12:17.061 "memory_domains": [ 00:12:17.061 { 00:12:17.061 "dma_device_id": 
"system", 00:12:17.061 "dma_device_type": 1 00:12:17.061 }, 00:12:17.061 { 00:12:17.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.061 "dma_device_type": 2 00:12:17.061 } 00:12:17.061 ], 00:12:17.061 "driver_specific": {} 00:12:17.061 } 00:12:17.061 ] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.061 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:17.062 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.321 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.321 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.321 "name": "Existed_Raid", 00:12:17.321 "uuid": "360cc543-75cb-43af-93fc-091c564e527f", 00:12:17.321 "strip_size_kb": 0, 00:12:17.321 "state": "configuring", 00:12:17.321 "raid_level": "raid1", 00:12:17.321 "superblock": true, 00:12:17.321 "num_base_bdevs": 3, 00:12:17.321 "num_base_bdevs_discovered": 1, 00:12:17.321 "num_base_bdevs_operational": 3, 00:12:17.321 "base_bdevs_list": [ 00:12:17.321 { 00:12:17.321 "name": "BaseBdev1", 00:12:17.321 "uuid": "6891c073-67a5-4829-b3f0-a61c30de67f9", 00:12:17.321 "is_configured": true, 00:12:17.321 "data_offset": 2048, 00:12:17.321 "data_size": 63488 00:12:17.321 }, 00:12:17.321 { 00:12:17.321 "name": "BaseBdev2", 00:12:17.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.321 "is_configured": false, 00:12:17.321 "data_offset": 0, 00:12:17.321 "data_size": 0 00:12:17.321 }, 00:12:17.321 { 00:12:17.321 "name": "BaseBdev3", 00:12:17.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.321 "is_configured": false, 00:12:17.321 "data_offset": 0, 00:12:17.321 "data_size": 0 00:12:17.321 } 00:12:17.321 ] 00:12:17.321 }' 00:12:17.321 22:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.321 22:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.580 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.580 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.580 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:17.580 [2024-09-27 22:29:13.367352] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.580 [2024-09-27 22:29:13.367416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:17.580 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.581 [2024-09-27 22:29:13.375409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.581 [2024-09-27 22:29:13.377767] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.581 [2024-09-27 22:29:13.377827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.581 [2024-09-27 22:29:13.377840] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:17.581 [2024-09-27 22:29:13.377854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.581 "name": "Existed_Raid", 00:12:17.581 "uuid": "ac7c59c7-5567-4925-b31d-5a754c271903", 00:12:17.581 "strip_size_kb": 0, 00:12:17.581 "state": "configuring", 00:12:17.581 "raid_level": "raid1", 00:12:17.581 "superblock": true, 00:12:17.581 "num_base_bdevs": 3, 00:12:17.581 "num_base_bdevs_discovered": 1, 00:12:17.581 
"num_base_bdevs_operational": 3, 00:12:17.581 "base_bdevs_list": [ 00:12:17.581 { 00:12:17.581 "name": "BaseBdev1", 00:12:17.581 "uuid": "6891c073-67a5-4829-b3f0-a61c30de67f9", 00:12:17.581 "is_configured": true, 00:12:17.581 "data_offset": 2048, 00:12:17.581 "data_size": 63488 00:12:17.581 }, 00:12:17.581 { 00:12:17.581 "name": "BaseBdev2", 00:12:17.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.581 "is_configured": false, 00:12:17.581 "data_offset": 0, 00:12:17.581 "data_size": 0 00:12:17.581 }, 00:12:17.581 { 00:12:17.581 "name": "BaseBdev3", 00:12:17.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.581 "is_configured": false, 00:12:17.581 "data_offset": 0, 00:12:17.581 "data_size": 0 00:12:17.581 } 00:12:17.581 ] 00:12:17.581 }' 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.581 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.150 [2024-09-27 22:29:13.885121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.150 BaseBdev2 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.150 22:29:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.150 [ 00:12:18.150 { 00:12:18.150 "name": "BaseBdev2", 00:12:18.150 "aliases": [ 00:12:18.150 "36ea7e42-3f9a-47d4-a1a0-8fea706ccc2e" 00:12:18.150 ], 00:12:18.150 "product_name": "Malloc disk", 00:12:18.150 "block_size": 512, 00:12:18.150 "num_blocks": 65536, 00:12:18.150 "uuid": "36ea7e42-3f9a-47d4-a1a0-8fea706ccc2e", 00:12:18.150 "assigned_rate_limits": { 00:12:18.150 "rw_ios_per_sec": 0, 00:12:18.150 "rw_mbytes_per_sec": 0, 00:12:18.150 "r_mbytes_per_sec": 0, 00:12:18.150 "w_mbytes_per_sec": 0 00:12:18.150 }, 00:12:18.150 "claimed": true, 00:12:18.150 "claim_type": "exclusive_write", 00:12:18.150 "zoned": false, 00:12:18.150 "supported_io_types": { 00:12:18.150 "read": true, 00:12:18.150 "write": true, 00:12:18.150 "unmap": true, 00:12:18.150 "flush": true, 00:12:18.150 "reset": true, 00:12:18.150 "nvme_admin": false, 00:12:18.150 "nvme_io": false, 
00:12:18.150 "nvme_io_md": false, 00:12:18.150 "write_zeroes": true, 00:12:18.150 "zcopy": true, 00:12:18.150 "get_zone_info": false, 00:12:18.150 "zone_management": false, 00:12:18.150 "zone_append": false, 00:12:18.150 "compare": false, 00:12:18.150 "compare_and_write": false, 00:12:18.150 "abort": true, 00:12:18.150 "seek_hole": false, 00:12:18.150 "seek_data": false, 00:12:18.150 "copy": true, 00:12:18.150 "nvme_iov_md": false 00:12:18.150 }, 00:12:18.150 "memory_domains": [ 00:12:18.150 { 00:12:18.150 "dma_device_id": "system", 00:12:18.150 "dma_device_type": 1 00:12:18.150 }, 00:12:18.150 { 00:12:18.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.150 "dma_device_type": 2 00:12:18.150 } 00:12:18.150 ], 00:12:18.150 "driver_specific": {} 00:12:18.150 } 00:12:18.150 ] 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.150 22:29:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.150 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.150 "name": "Existed_Raid", 00:12:18.150 "uuid": "ac7c59c7-5567-4925-b31d-5a754c271903", 00:12:18.150 "strip_size_kb": 0, 00:12:18.150 "state": "configuring", 00:12:18.150 "raid_level": "raid1", 00:12:18.150 "superblock": true, 00:12:18.150 "num_base_bdevs": 3, 00:12:18.150 "num_base_bdevs_discovered": 2, 00:12:18.150 "num_base_bdevs_operational": 3, 00:12:18.150 "base_bdevs_list": [ 00:12:18.150 { 00:12:18.150 "name": "BaseBdev1", 00:12:18.150 "uuid": "6891c073-67a5-4829-b3f0-a61c30de67f9", 00:12:18.151 "is_configured": true, 00:12:18.151 "data_offset": 2048, 00:12:18.151 "data_size": 63488 00:12:18.151 }, 00:12:18.151 { 00:12:18.151 "name": "BaseBdev2", 00:12:18.151 "uuid": "36ea7e42-3f9a-47d4-a1a0-8fea706ccc2e", 00:12:18.151 "is_configured": true, 00:12:18.151 "data_offset": 2048, 00:12:18.151 "data_size": 63488 00:12:18.151 }, 00:12:18.151 { 00:12:18.151 
"name": "BaseBdev3", 00:12:18.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.151 "is_configured": false, 00:12:18.151 "data_offset": 0, 00:12:18.151 "data_size": 0 00:12:18.151 } 00:12:18.151 ] 00:12:18.151 }' 00:12:18.151 22:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.151 22:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.719 [2024-09-27 22:29:14.404189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.719 [2024-09-27 22:29:14.404466] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:18.719 [2024-09-27 22:29:14.404493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.719 [2024-09-27 22:29:14.404781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:18.719 BaseBdev3 00:12:18.719 [2024-09-27 22:29:14.404946] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:18.719 [2024-09-27 22:29:14.404963] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:18.719 [2024-09-27 22:29:14.405154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.719 [ 00:12:18.719 { 00:12:18.719 "name": "BaseBdev3", 00:12:18.719 "aliases": [ 00:12:18.719 "8e66c076-489a-4b83-a56e-114f300862a4" 00:12:18.719 ], 00:12:18.719 "product_name": "Malloc disk", 00:12:18.719 "block_size": 512, 00:12:18.719 "num_blocks": 65536, 00:12:18.719 "uuid": "8e66c076-489a-4b83-a56e-114f300862a4", 00:12:18.719 "assigned_rate_limits": { 00:12:18.719 "rw_ios_per_sec": 0, 00:12:18.719 "rw_mbytes_per_sec": 0, 00:12:18.719 "r_mbytes_per_sec": 0, 00:12:18.719 "w_mbytes_per_sec": 0 00:12:18.719 }, 00:12:18.719 "claimed": true, 00:12:18.719 "claim_type": "exclusive_write", 00:12:18.719 "zoned": false, 00:12:18.719 "supported_io_types": { 
00:12:18.719 "read": true, 00:12:18.719 "write": true, 00:12:18.719 "unmap": true, 00:12:18.719 "flush": true, 00:12:18.719 "reset": true, 00:12:18.719 "nvme_admin": false, 00:12:18.719 "nvme_io": false, 00:12:18.719 "nvme_io_md": false, 00:12:18.719 "write_zeroes": true, 00:12:18.719 "zcopy": true, 00:12:18.719 "get_zone_info": false, 00:12:18.719 "zone_management": false, 00:12:18.719 "zone_append": false, 00:12:18.719 "compare": false, 00:12:18.719 "compare_and_write": false, 00:12:18.719 "abort": true, 00:12:18.719 "seek_hole": false, 00:12:18.719 "seek_data": false, 00:12:18.719 "copy": true, 00:12:18.719 "nvme_iov_md": false 00:12:18.719 }, 00:12:18.719 "memory_domains": [ 00:12:18.719 { 00:12:18.719 "dma_device_id": "system", 00:12:18.719 "dma_device_type": 1 00:12:18.719 }, 00:12:18.719 { 00:12:18.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.719 "dma_device_type": 2 00:12:18.719 } 00:12:18.719 ], 00:12:18.719 "driver_specific": {} 00:12:18.719 } 00:12:18.719 ] 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.719 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.719 "name": "Existed_Raid", 00:12:18.719 "uuid": "ac7c59c7-5567-4925-b31d-5a754c271903", 00:12:18.719 "strip_size_kb": 0, 00:12:18.719 "state": "online", 00:12:18.719 "raid_level": "raid1", 00:12:18.719 "superblock": true, 00:12:18.719 "num_base_bdevs": 3, 00:12:18.719 "num_base_bdevs_discovered": 3, 00:12:18.719 "num_base_bdevs_operational": 3, 00:12:18.719 "base_bdevs_list": [ 00:12:18.719 { 00:12:18.720 "name": "BaseBdev1", 00:12:18.720 "uuid": "6891c073-67a5-4829-b3f0-a61c30de67f9", 00:12:18.720 "is_configured": true, 00:12:18.720 "data_offset": 2048, 00:12:18.720 "data_size": 63488 00:12:18.720 }, 00:12:18.720 { 00:12:18.720 "name": "BaseBdev2", 00:12:18.720 "uuid": 
"36ea7e42-3f9a-47d4-a1a0-8fea706ccc2e", 00:12:18.720 "is_configured": true, 00:12:18.720 "data_offset": 2048, 00:12:18.720 "data_size": 63488 00:12:18.720 }, 00:12:18.720 { 00:12:18.720 "name": "BaseBdev3", 00:12:18.720 "uuid": "8e66c076-489a-4b83-a56e-114f300862a4", 00:12:18.720 "is_configured": true, 00:12:18.720 "data_offset": 2048, 00:12:18.720 "data_size": 63488 00:12:18.720 } 00:12:18.720 ] 00:12:18.720 }' 00:12:18.720 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.720 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.307 [2024-09-27 22:29:14.907910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:19.307 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.307 "name": "Existed_Raid", 00:12:19.307 "aliases": [ 00:12:19.307 "ac7c59c7-5567-4925-b31d-5a754c271903" 00:12:19.307 ], 00:12:19.307 "product_name": "Raid Volume", 00:12:19.307 "block_size": 512, 00:12:19.307 "num_blocks": 63488, 00:12:19.307 "uuid": "ac7c59c7-5567-4925-b31d-5a754c271903", 00:12:19.307 "assigned_rate_limits": { 00:12:19.307 "rw_ios_per_sec": 0, 00:12:19.307 "rw_mbytes_per_sec": 0, 00:12:19.307 "r_mbytes_per_sec": 0, 00:12:19.307 "w_mbytes_per_sec": 0 00:12:19.307 }, 00:12:19.307 "claimed": false, 00:12:19.307 "zoned": false, 00:12:19.307 "supported_io_types": { 00:12:19.307 "read": true, 00:12:19.307 "write": true, 00:12:19.307 "unmap": false, 00:12:19.307 "flush": false, 00:12:19.307 "reset": true, 00:12:19.307 "nvme_admin": false, 00:12:19.307 "nvme_io": false, 00:12:19.307 "nvme_io_md": false, 00:12:19.307 "write_zeroes": true, 00:12:19.307 "zcopy": false, 00:12:19.307 "get_zone_info": false, 00:12:19.307 "zone_management": false, 00:12:19.307 "zone_append": false, 00:12:19.307 "compare": false, 00:12:19.307 "compare_and_write": false, 00:12:19.307 "abort": false, 00:12:19.307 "seek_hole": false, 00:12:19.307 "seek_data": false, 00:12:19.307 "copy": false, 00:12:19.307 "nvme_iov_md": false 00:12:19.307 }, 00:12:19.307 "memory_domains": [ 00:12:19.307 { 00:12:19.307 "dma_device_id": "system", 00:12:19.307 "dma_device_type": 1 00:12:19.307 }, 00:12:19.307 { 00:12:19.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.307 "dma_device_type": 2 00:12:19.307 }, 00:12:19.307 { 00:12:19.307 "dma_device_id": "system", 00:12:19.307 "dma_device_type": 1 00:12:19.307 }, 00:12:19.307 { 00:12:19.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.307 "dma_device_type": 2 00:12:19.307 }, 00:12:19.307 { 00:12:19.307 "dma_device_id": "system", 00:12:19.307 "dma_device_type": 1 00:12:19.307 }, 00:12:19.307 { 00:12:19.307 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.307 "dma_device_type": 2 00:12:19.307 } 00:12:19.307 ], 00:12:19.307 "driver_specific": { 00:12:19.307 "raid": { 00:12:19.308 "uuid": "ac7c59c7-5567-4925-b31d-5a754c271903", 00:12:19.308 "strip_size_kb": 0, 00:12:19.308 "state": "online", 00:12:19.308 "raid_level": "raid1", 00:12:19.308 "superblock": true, 00:12:19.308 "num_base_bdevs": 3, 00:12:19.308 "num_base_bdevs_discovered": 3, 00:12:19.308 "num_base_bdevs_operational": 3, 00:12:19.308 "base_bdevs_list": [ 00:12:19.308 { 00:12:19.308 "name": "BaseBdev1", 00:12:19.308 "uuid": "6891c073-67a5-4829-b3f0-a61c30de67f9", 00:12:19.308 "is_configured": true, 00:12:19.308 "data_offset": 2048, 00:12:19.308 "data_size": 63488 00:12:19.308 }, 00:12:19.308 { 00:12:19.308 "name": "BaseBdev2", 00:12:19.308 "uuid": "36ea7e42-3f9a-47d4-a1a0-8fea706ccc2e", 00:12:19.308 "is_configured": true, 00:12:19.308 "data_offset": 2048, 00:12:19.308 "data_size": 63488 00:12:19.308 }, 00:12:19.308 { 00:12:19.308 "name": "BaseBdev3", 00:12:19.308 "uuid": "8e66c076-489a-4b83-a56e-114f300862a4", 00:12:19.308 "is_configured": true, 00:12:19.308 "data_offset": 2048, 00:12:19.308 "data_size": 63488 00:12:19.308 } 00:12:19.308 ] 00:12:19.308 } 00:12:19.308 } 00:12:19.308 }' 00:12:19.308 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.308 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:19.308 BaseBdev2 00:12:19.308 BaseBdev3' 00:12:19.308 22:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.308 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.308 [2024-09-27 22:29:15.147409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid online raid1 0 2 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.567 "name": "Existed_Raid", 00:12:19.567 "uuid": "ac7c59c7-5567-4925-b31d-5a754c271903", 00:12:19.567 "strip_size_kb": 0, 00:12:19.567 "state": "online", 00:12:19.567 "raid_level": "raid1", 00:12:19.567 "superblock": true, 
00:12:19.567 "num_base_bdevs": 3, 00:12:19.567 "num_base_bdevs_discovered": 2, 00:12:19.567 "num_base_bdevs_operational": 2, 00:12:19.567 "base_bdevs_list": [ 00:12:19.567 { 00:12:19.567 "name": null, 00:12:19.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.567 "is_configured": false, 00:12:19.567 "data_offset": 0, 00:12:19.567 "data_size": 63488 00:12:19.567 }, 00:12:19.567 { 00:12:19.567 "name": "BaseBdev2", 00:12:19.567 "uuid": "36ea7e42-3f9a-47d4-a1a0-8fea706ccc2e", 00:12:19.567 "is_configured": true, 00:12:19.567 "data_offset": 2048, 00:12:19.567 "data_size": 63488 00:12:19.567 }, 00:12:19.567 { 00:12:19.567 "name": "BaseBdev3", 00:12:19.567 "uuid": "8e66c076-489a-4b83-a56e-114f300862a4", 00:12:19.567 "is_configured": true, 00:12:19.567 "data_offset": 2048, 00:12:19.567 "data_size": 63488 00:12:19.567 } 00:12:19.567 ] 00:12:19.567 }' 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.567 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.137 22:29:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.137 [2024-09-27 22:29:15.775590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.137 22:29:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.137 [2024-09-27 22:29:15.942198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.137 [2024-09-27 22:29:15.942450] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.397 [2024-09-27 22:29:16.048419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.397 [2024-09-27 22:29:16.048688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.397 [2024-09-27 22:29:16.048716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.397 BaseBdev2 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.397 [ 00:12:20.397 { 00:12:20.397 "name": "BaseBdev2", 00:12:20.397 "aliases": [ 00:12:20.397 "89e55eed-76e4-4d61-87a6-449d5dd45b8e" 00:12:20.397 ], 00:12:20.397 "product_name": "Malloc disk", 00:12:20.397 "block_size": 512, 00:12:20.397 "num_blocks": 65536, 00:12:20.397 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:20.397 "assigned_rate_limits": { 00:12:20.397 "rw_ios_per_sec": 0, 00:12:20.397 "rw_mbytes_per_sec": 0, 00:12:20.397 "r_mbytes_per_sec": 0, 00:12:20.397 "w_mbytes_per_sec": 0 00:12:20.397 }, 00:12:20.397 "claimed": false, 00:12:20.397 "zoned": false, 00:12:20.397 "supported_io_types": { 00:12:20.397 "read": true, 00:12:20.397 "write": true, 00:12:20.397 "unmap": true, 00:12:20.397 "flush": true, 00:12:20.397 "reset": true, 00:12:20.397 "nvme_admin": false, 00:12:20.397 "nvme_io": false, 00:12:20.397 "nvme_io_md": false, 00:12:20.397 "write_zeroes": true, 00:12:20.397 "zcopy": true, 00:12:20.397 "get_zone_info": false, 00:12:20.397 "zone_management": false, 00:12:20.397 "zone_append": false, 00:12:20.397 "compare": false, 00:12:20.397 "compare_and_write": false, 00:12:20.397 "abort": true, 00:12:20.397 "seek_hole": false, 00:12:20.397 "seek_data": false, 00:12:20.397 "copy": true, 00:12:20.397 "nvme_iov_md": false 00:12:20.397 }, 00:12:20.397 "memory_domains": [ 00:12:20.397 { 00:12:20.397 "dma_device_id": "system", 00:12:20.397 "dma_device_type": 1 00:12:20.397 }, 00:12:20.397 { 00:12:20.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.397 "dma_device_type": 2 00:12:20.397 } 00:12:20.397 ], 00:12:20.397 "driver_specific": {} 00:12:20.397 } 00:12:20.397 ] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.397 BaseBdev3 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.397 
22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.397 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.656 [ 00:12:20.656 { 00:12:20.656 "name": "BaseBdev3", 00:12:20.656 "aliases": [ 00:12:20.656 "2b4ccf49-547f-4235-b5c9-d93159a16005" 00:12:20.656 ], 00:12:20.656 "product_name": "Malloc disk", 00:12:20.656 "block_size": 512, 00:12:20.656 "num_blocks": 65536, 00:12:20.656 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:20.656 "assigned_rate_limits": { 00:12:20.656 "rw_ios_per_sec": 0, 00:12:20.656 "rw_mbytes_per_sec": 0, 00:12:20.656 "r_mbytes_per_sec": 0, 00:12:20.656 "w_mbytes_per_sec": 0 00:12:20.656 }, 00:12:20.656 "claimed": false, 00:12:20.656 "zoned": false, 00:12:20.656 "supported_io_types": { 00:12:20.656 "read": true, 00:12:20.656 "write": true, 00:12:20.656 "unmap": true, 00:12:20.656 "flush": true, 00:12:20.656 "reset": true, 00:12:20.656 "nvme_admin": false, 00:12:20.656 "nvme_io": false, 00:12:20.656 "nvme_io_md": false, 00:12:20.656 "write_zeroes": true, 00:12:20.657 "zcopy": true, 00:12:20.657 "get_zone_info": false, 00:12:20.657 "zone_management": false, 00:12:20.657 "zone_append": false, 00:12:20.657 "compare": false, 00:12:20.657 "compare_and_write": false, 00:12:20.657 "abort": true, 00:12:20.657 "seek_hole": false, 00:12:20.657 "seek_data": false, 00:12:20.657 "copy": true, 00:12:20.657 "nvme_iov_md": false 00:12:20.657 }, 00:12:20.657 "memory_domains": [ 00:12:20.657 { 00:12:20.657 "dma_device_id": "system", 00:12:20.657 "dma_device_type": 1 00:12:20.657 }, 00:12:20.657 { 00:12:20.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.657 "dma_device_type": 2 00:12:20.657 } 00:12:20.657 ], 00:12:20.657 "driver_specific": {} 00:12:20.657 } 00:12:20.657 ] 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.657 [2024-09-27 22:29:16.301359] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.657 [2024-09-27 22:29:16.301419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.657 [2024-09-27 22:29:16.301446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.657 [2024-09-27 22:29:16.304016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.657 "name": "Existed_Raid", 00:12:20.657 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:20.657 "strip_size_kb": 0, 00:12:20.657 "state": "configuring", 00:12:20.657 "raid_level": "raid1", 00:12:20.657 "superblock": true, 00:12:20.657 "num_base_bdevs": 3, 00:12:20.657 "num_base_bdevs_discovered": 2, 00:12:20.657 "num_base_bdevs_operational": 3, 00:12:20.657 "base_bdevs_list": [ 00:12:20.657 { 00:12:20.657 "name": "BaseBdev1", 00:12:20.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.657 "is_configured": false, 00:12:20.657 "data_offset": 0, 00:12:20.657 "data_size": 0 00:12:20.657 }, 00:12:20.657 { 00:12:20.657 "name": "BaseBdev2", 00:12:20.657 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:20.657 "is_configured": true, 00:12:20.657 "data_offset": 2048, 00:12:20.657 "data_size": 63488 
00:12:20.657 }, 00:12:20.657 { 00:12:20.657 "name": "BaseBdev3", 00:12:20.657 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:20.657 "is_configured": true, 00:12:20.657 "data_offset": 2048, 00:12:20.657 "data_size": 63488 00:12:20.657 } 00:12:20.657 ] 00:12:20.657 }' 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.657 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.917 [2024-09-27 22:29:16.784712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.917 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.176 "name": "Existed_Raid", 00:12:21.176 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:21.176 "strip_size_kb": 0, 00:12:21.176 "state": "configuring", 00:12:21.176 "raid_level": "raid1", 00:12:21.176 "superblock": true, 00:12:21.176 "num_base_bdevs": 3, 00:12:21.176 "num_base_bdevs_discovered": 1, 00:12:21.176 "num_base_bdevs_operational": 3, 00:12:21.176 "base_bdevs_list": [ 00:12:21.176 { 00:12:21.176 "name": "BaseBdev1", 00:12:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.176 "is_configured": false, 00:12:21.176 "data_offset": 0, 00:12:21.176 "data_size": 0 00:12:21.176 }, 00:12:21.176 { 00:12:21.176 "name": null, 00:12:21.176 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:21.176 "is_configured": false, 00:12:21.176 "data_offset": 0, 00:12:21.176 "data_size": 63488 00:12:21.176 }, 00:12:21.176 { 00:12:21.176 "name": "BaseBdev3", 00:12:21.176 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:21.176 "is_configured": true, 00:12:21.176 "data_offset": 2048, 00:12:21.176 "data_size": 63488 00:12:21.176 
} 00:12:21.176 ] 00:12:21.176 }' 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.176 22:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.435 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.695 [2024-09-27 22:29:17.335022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.695 BaseBdev1 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.695 [ 00:12:21.695 { 00:12:21.695 "name": "BaseBdev1", 00:12:21.695 "aliases": [ 00:12:21.695 "7c971c80-e7aa-4a64-b0c9-72d80a581c77" 00:12:21.695 ], 00:12:21.695 "product_name": "Malloc disk", 00:12:21.695 "block_size": 512, 00:12:21.695 "num_blocks": 65536, 00:12:21.695 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:21.695 "assigned_rate_limits": { 00:12:21.695 "rw_ios_per_sec": 0, 00:12:21.695 "rw_mbytes_per_sec": 0, 00:12:21.695 "r_mbytes_per_sec": 0, 00:12:21.695 "w_mbytes_per_sec": 0 00:12:21.695 }, 00:12:21.695 "claimed": true, 00:12:21.695 "claim_type": "exclusive_write", 00:12:21.695 "zoned": false, 00:12:21.695 "supported_io_types": { 00:12:21.695 "read": true, 00:12:21.695 "write": true, 00:12:21.695 "unmap": true, 00:12:21.695 "flush": true, 00:12:21.695 "reset": true, 00:12:21.695 "nvme_admin": false, 00:12:21.695 "nvme_io": false, 00:12:21.695 "nvme_io_md": false, 
00:12:21.695 "write_zeroes": true, 00:12:21.695 "zcopy": true, 00:12:21.695 "get_zone_info": false, 00:12:21.695 "zone_management": false, 00:12:21.695 "zone_append": false, 00:12:21.695 "compare": false, 00:12:21.695 "compare_and_write": false, 00:12:21.695 "abort": true, 00:12:21.695 "seek_hole": false, 00:12:21.695 "seek_data": false, 00:12:21.695 "copy": true, 00:12:21.695 "nvme_iov_md": false 00:12:21.695 }, 00:12:21.695 "memory_domains": [ 00:12:21.695 { 00:12:21.695 "dma_device_id": "system", 00:12:21.695 "dma_device_type": 1 00:12:21.695 }, 00:12:21.695 { 00:12:21.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.695 "dma_device_type": 2 00:12:21.695 } 00:12:21.695 ], 00:12:21.695 "driver_specific": {} 00:12:21.695 } 00:12:21.695 ] 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.695 "name": "Existed_Raid", 00:12:21.695 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:21.695 "strip_size_kb": 0, 00:12:21.695 "state": "configuring", 00:12:21.695 "raid_level": "raid1", 00:12:21.695 "superblock": true, 00:12:21.695 "num_base_bdevs": 3, 00:12:21.695 "num_base_bdevs_discovered": 2, 00:12:21.695 "num_base_bdevs_operational": 3, 00:12:21.695 "base_bdevs_list": [ 00:12:21.695 { 00:12:21.695 "name": "BaseBdev1", 00:12:21.695 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:21.695 "is_configured": true, 00:12:21.695 "data_offset": 2048, 00:12:21.695 "data_size": 63488 00:12:21.695 }, 00:12:21.695 { 00:12:21.695 "name": null, 00:12:21.695 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:21.695 "is_configured": false, 00:12:21.695 "data_offset": 0, 00:12:21.695 "data_size": 63488 00:12:21.695 }, 00:12:21.695 { 00:12:21.695 "name": "BaseBdev3", 00:12:21.695 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:21.695 "is_configured": true, 00:12:21.695 "data_offset": 2048, 00:12:21.695 "data_size": 63488 00:12:21.695 } 00:12:21.695 ] 00:12:21.695 }' 00:12:21.695 22:29:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.695 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.955 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.955 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.955 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.955 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.955 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.214 [2024-09-27 22:29:17.846426] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.214 22:29:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.214 "name": "Existed_Raid", 00:12:22.214 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:22.214 "strip_size_kb": 0, 00:12:22.214 "state": "configuring", 00:12:22.214 "raid_level": "raid1", 00:12:22.214 "superblock": true, 00:12:22.214 "num_base_bdevs": 3, 00:12:22.214 "num_base_bdevs_discovered": 1, 00:12:22.214 "num_base_bdevs_operational": 3, 00:12:22.214 "base_bdevs_list": [ 00:12:22.214 { 00:12:22.214 "name": "BaseBdev1", 00:12:22.214 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:22.214 "is_configured": true, 00:12:22.214 "data_offset": 2048, 00:12:22.214 "data_size": 63488 00:12:22.214 }, 00:12:22.214 { 
00:12:22.214 "name": null, 00:12:22.214 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:22.214 "is_configured": false, 00:12:22.214 "data_offset": 0, 00:12:22.214 "data_size": 63488 00:12:22.214 }, 00:12:22.214 { 00:12:22.214 "name": null, 00:12:22.214 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:22.214 "is_configured": false, 00:12:22.214 "data_offset": 0, 00:12:22.214 "data_size": 63488 00:12:22.214 } 00:12:22.214 ] 00:12:22.214 }' 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.214 22:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.474 [2024-09-27 22:29:18.309803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.474 22:29:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.474 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.733 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.733 "name": "Existed_Raid", 00:12:22.733 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:22.733 "strip_size_kb": 0, 
00:12:22.733 "state": "configuring", 00:12:22.733 "raid_level": "raid1", 00:12:22.733 "superblock": true, 00:12:22.733 "num_base_bdevs": 3, 00:12:22.733 "num_base_bdevs_discovered": 2, 00:12:22.733 "num_base_bdevs_operational": 3, 00:12:22.733 "base_bdevs_list": [ 00:12:22.733 { 00:12:22.733 "name": "BaseBdev1", 00:12:22.733 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:22.733 "is_configured": true, 00:12:22.733 "data_offset": 2048, 00:12:22.733 "data_size": 63488 00:12:22.733 }, 00:12:22.733 { 00:12:22.733 "name": null, 00:12:22.733 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:22.733 "is_configured": false, 00:12:22.733 "data_offset": 0, 00:12:22.733 "data_size": 63488 00:12:22.733 }, 00:12:22.733 { 00:12:22.733 "name": "BaseBdev3", 00:12:22.733 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:22.733 "is_configured": true, 00:12:22.733 "data_offset": 2048, 00:12:22.733 "data_size": 63488 00:12:22.733 } 00:12:22.733 ] 00:12:22.733 }' 00:12:22.733 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.733 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.992 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.992 [2024-09-27 22:29:18.805206] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.252 22:29:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.252 "name": "Existed_Raid", 00:12:23.252 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:23.252 "strip_size_kb": 0, 00:12:23.252 "state": "configuring", 00:12:23.252 "raid_level": "raid1", 00:12:23.252 "superblock": true, 00:12:23.252 "num_base_bdevs": 3, 00:12:23.252 "num_base_bdevs_discovered": 1, 00:12:23.252 "num_base_bdevs_operational": 3, 00:12:23.252 "base_bdevs_list": [ 00:12:23.252 { 00:12:23.252 "name": null, 00:12:23.252 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:23.252 "is_configured": false, 00:12:23.252 "data_offset": 0, 00:12:23.252 "data_size": 63488 00:12:23.252 }, 00:12:23.252 { 00:12:23.252 "name": null, 00:12:23.252 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:23.252 "is_configured": false, 00:12:23.252 "data_offset": 0, 00:12:23.252 "data_size": 63488 00:12:23.252 }, 00:12:23.252 { 00:12:23.252 "name": "BaseBdev3", 00:12:23.252 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:23.252 "is_configured": true, 00:12:23.252 "data_offset": 2048, 00:12:23.252 "data_size": 63488 00:12:23.252 } 00:12:23.252 ] 00:12:23.252 }' 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.252 22:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.512 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.512 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.512 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:23.512 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.512 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 [2024-09-27 22:29:19.408651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.771 "name": "Existed_Raid", 00:12:23.771 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:23.771 "strip_size_kb": 0, 00:12:23.771 "state": "configuring", 00:12:23.771 "raid_level": "raid1", 00:12:23.771 "superblock": true, 00:12:23.771 "num_base_bdevs": 3, 00:12:23.771 "num_base_bdevs_discovered": 2, 00:12:23.771 "num_base_bdevs_operational": 3, 00:12:23.771 "base_bdevs_list": [ 00:12:23.771 { 00:12:23.771 "name": null, 00:12:23.771 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:23.771 "is_configured": false, 00:12:23.771 "data_offset": 0, 00:12:23.771 "data_size": 63488 00:12:23.771 }, 00:12:23.771 { 00:12:23.771 "name": "BaseBdev2", 00:12:23.771 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:23.771 "is_configured": true, 00:12:23.771 "data_offset": 2048, 00:12:23.771 "data_size": 63488 00:12:23.771 }, 00:12:23.771 { 00:12:23.771 "name": "BaseBdev3", 00:12:23.771 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:23.771 "is_configured": true, 00:12:23.771 "data_offset": 2048, 00:12:23.771 "data_size": 63488 00:12:23.771 } 00:12:23.771 ] 00:12:23.771 }' 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.771 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.029 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7c971c80-e7aa-4a64-b0c9-72d80a581c77 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.288 [2024-09-27 22:29:19.982796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:24.288 NewBaseBdev 00:12:24.288 [2024-09-27 
22:29:19.983399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:24.288 [2024-09-27 22:29:19.983425] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.288 [2024-09-27 22:29:19.983717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:24.288 [2024-09-27 22:29:19.983884] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:24.288 [2024-09-27 22:29:19.983900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:24.288 [2024-09-27 22:29:19.984058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.288 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.289 22:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.289 [ 00:12:24.289 { 00:12:24.289 "name": "NewBaseBdev", 00:12:24.289 "aliases": [ 00:12:24.289 "7c971c80-e7aa-4a64-b0c9-72d80a581c77" 00:12:24.289 ], 00:12:24.289 "product_name": "Malloc disk", 00:12:24.289 "block_size": 512, 00:12:24.289 "num_blocks": 65536, 00:12:24.289 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:24.289 "assigned_rate_limits": { 00:12:24.289 "rw_ios_per_sec": 0, 00:12:24.289 "rw_mbytes_per_sec": 0, 00:12:24.289 "r_mbytes_per_sec": 0, 00:12:24.289 "w_mbytes_per_sec": 0 00:12:24.289 }, 00:12:24.289 "claimed": true, 00:12:24.289 "claim_type": "exclusive_write", 00:12:24.289 "zoned": false, 00:12:24.289 "supported_io_types": { 00:12:24.289 "read": true, 00:12:24.289 "write": true, 00:12:24.289 "unmap": true, 00:12:24.289 "flush": true, 00:12:24.289 "reset": true, 00:12:24.289 "nvme_admin": false, 00:12:24.289 "nvme_io": false, 00:12:24.289 "nvme_io_md": false, 00:12:24.289 "write_zeroes": true, 00:12:24.289 "zcopy": true, 00:12:24.289 "get_zone_info": false, 00:12:24.289 "zone_management": false, 00:12:24.289 "zone_append": false, 00:12:24.289 "compare": false, 00:12:24.289 "compare_and_write": false, 00:12:24.289 "abort": true, 00:12:24.289 "seek_hole": false, 00:12:24.289 "seek_data": false, 00:12:24.289 "copy": true, 00:12:24.289 "nvme_iov_md": false 00:12:24.289 }, 00:12:24.289 "memory_domains": [ 00:12:24.289 { 00:12:24.289 "dma_device_id": "system", 00:12:24.289 "dma_device_type": 1 00:12:24.289 }, 00:12:24.289 { 00:12:24.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.289 "dma_device_type": 2 00:12:24.289 } 00:12:24.289 ], 00:12:24.289 
"driver_specific": {} 00:12:24.289 } 00:12:24.289 ] 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.289 "name": "Existed_Raid", 00:12:24.289 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:24.289 "strip_size_kb": 0, 00:12:24.289 "state": "online", 00:12:24.289 "raid_level": "raid1", 00:12:24.289 "superblock": true, 00:12:24.289 "num_base_bdevs": 3, 00:12:24.289 "num_base_bdevs_discovered": 3, 00:12:24.289 "num_base_bdevs_operational": 3, 00:12:24.289 "base_bdevs_list": [ 00:12:24.289 { 00:12:24.289 "name": "NewBaseBdev", 00:12:24.289 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:24.289 "is_configured": true, 00:12:24.289 "data_offset": 2048, 00:12:24.289 "data_size": 63488 00:12:24.289 }, 00:12:24.289 { 00:12:24.289 "name": "BaseBdev2", 00:12:24.289 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:24.289 "is_configured": true, 00:12:24.289 "data_offset": 2048, 00:12:24.289 "data_size": 63488 00:12:24.289 }, 00:12:24.289 { 00:12:24.289 "name": "BaseBdev3", 00:12:24.289 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:24.289 "is_configured": true, 00:12:24.289 "data_offset": 2048, 00:12:24.289 "data_size": 63488 00:12:24.289 } 00:12:24.289 ] 00:12:24.289 }' 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.289 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:24.860 22:29:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.860 [2024-09-27 22:29:20.462556] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:24.860 "name": "Existed_Raid", 00:12:24.860 "aliases": [ 00:12:24.860 "7fb1742f-fc5b-47e0-9775-c288b146c852" 00:12:24.860 ], 00:12:24.860 "product_name": "Raid Volume", 00:12:24.860 "block_size": 512, 00:12:24.860 "num_blocks": 63488, 00:12:24.860 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:24.860 "assigned_rate_limits": { 00:12:24.860 "rw_ios_per_sec": 0, 00:12:24.860 "rw_mbytes_per_sec": 0, 00:12:24.860 "r_mbytes_per_sec": 0, 00:12:24.860 "w_mbytes_per_sec": 0 00:12:24.860 }, 00:12:24.860 "claimed": false, 00:12:24.860 "zoned": false, 00:12:24.860 "supported_io_types": { 00:12:24.860 "read": true, 00:12:24.860 "write": true, 00:12:24.860 "unmap": false, 00:12:24.860 "flush": false, 00:12:24.860 "reset": true, 00:12:24.860 "nvme_admin": false, 00:12:24.860 "nvme_io": false, 00:12:24.860 "nvme_io_md": false, 00:12:24.860 "write_zeroes": true, 00:12:24.860 "zcopy": false, 00:12:24.860 "get_zone_info": false, 00:12:24.860 "zone_management": false, 00:12:24.860 "zone_append": false, 
00:12:24.860 "compare": false, 00:12:24.860 "compare_and_write": false, 00:12:24.860 "abort": false, 00:12:24.860 "seek_hole": false, 00:12:24.860 "seek_data": false, 00:12:24.860 "copy": false, 00:12:24.860 "nvme_iov_md": false 00:12:24.860 }, 00:12:24.860 "memory_domains": [ 00:12:24.860 { 00:12:24.860 "dma_device_id": "system", 00:12:24.860 "dma_device_type": 1 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.860 "dma_device_type": 2 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "dma_device_id": "system", 00:12:24.860 "dma_device_type": 1 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.860 "dma_device_type": 2 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "dma_device_id": "system", 00:12:24.860 "dma_device_type": 1 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.860 "dma_device_type": 2 00:12:24.860 } 00:12:24.860 ], 00:12:24.860 "driver_specific": { 00:12:24.860 "raid": { 00:12:24.860 "uuid": "7fb1742f-fc5b-47e0-9775-c288b146c852", 00:12:24.860 "strip_size_kb": 0, 00:12:24.860 "state": "online", 00:12:24.860 "raid_level": "raid1", 00:12:24.860 "superblock": true, 00:12:24.860 "num_base_bdevs": 3, 00:12:24.860 "num_base_bdevs_discovered": 3, 00:12:24.860 "num_base_bdevs_operational": 3, 00:12:24.860 "base_bdevs_list": [ 00:12:24.860 { 00:12:24.860 "name": "NewBaseBdev", 00:12:24.860 "uuid": "7c971c80-e7aa-4a64-b0c9-72d80a581c77", 00:12:24.860 "is_configured": true, 00:12:24.860 "data_offset": 2048, 00:12:24.860 "data_size": 63488 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "name": "BaseBdev2", 00:12:24.860 "uuid": "89e55eed-76e4-4d61-87a6-449d5dd45b8e", 00:12:24.860 "is_configured": true, 00:12:24.860 "data_offset": 2048, 00:12:24.860 "data_size": 63488 00:12:24.860 }, 00:12:24.860 { 00:12:24.860 "name": "BaseBdev3", 00:12:24.860 "uuid": "2b4ccf49-547f-4235-b5c9-d93159a16005", 00:12:24.860 "is_configured": true, 00:12:24.860 
"data_offset": 2048, 00:12:24.860 "data_size": 63488 00:12:24.860 } 00:12:24.860 ] 00:12:24.860 } 00:12:24.860 } 00:12:24.860 }' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:24.860 BaseBdev2 00:12:24.860 BaseBdev3' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.860 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:24.860 [2024-09-27 22:29:20.729870] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:24.860 [2024-09-27 22:29:20.730134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.860 [2024-09-27 22:29:20.730353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.860 [2024-09-27 22:29:20.730774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.860 [2024-09-27 22:29:20.730878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:24.861 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.861 22:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68611 00:12:24.861 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68611 ']' 00:12:24.861 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68611 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68611 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.120 killing process with pid 68611 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68611' 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # 
kill 68611 00:12:25.120 [2024-09-27 22:29:20.780040] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.120 22:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68611 00:12:25.378 [2024-09-27 22:29:21.117888] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.911 ************************************ 00:12:27.912 END TEST raid_state_function_test_sb 00:12:27.912 ************************************ 00:12:27.912 22:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:27.912 00:12:27.912 real 0m12.204s 00:12:27.912 user 0m18.414s 00:12:27.912 sys 0m2.246s 00:12:27.912 22:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.912 22:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 22:29:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:27.912 22:29:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:27.912 22:29:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.912 22:29:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 ************************************ 00:12:27.912 START TEST raid_superblock_test 00:12:27.912 ************************************ 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:27.912 22:29:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69253 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69253 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 69253 ']' 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.912 22:29:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.912 [2024-09-27 22:29:23.459905] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:12:27.912 [2024-09-27 22:29:23.460075] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69253 ] 00:12:27.912 [2024-09-27 22:29:23.632482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.171 [2024-09-27 22:29:23.907352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.433 [2024-09-27 22:29:24.166775] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.433 [2024-09-27 22:29:24.166818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 
00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.002 malloc1 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.002 [2024-09-27 22:29:24.733060] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.002 [2024-09-27 22:29:24.733324] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.002 [2024-09-27 22:29:24.733393] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:29.002 [2024-09-27 22:29:24.733477] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.002 [2024-09-27 22:29:24.736201] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.002 [2024-09-27 22:29:24.736374] vbdev_passthru.c: 
791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.002 pt1 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.002 malloc2 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.002 [2024-09-27 22:29:24.802605] vbdev_passthru.c: 687:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:12:29.002 [2024-09-27 22:29:24.802869] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.002 [2024-09-27 22:29:24.802945] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:29.002 [2024-09-27 22:29:24.803058] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.002 [2024-09-27 22:29:24.806110] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.002 [2024-09-27 22:29:24.806305] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.002 pt2 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:29.002 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.003 
malloc3 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.003 [2024-09-27 22:29:24.871300] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:29.003 [2024-09-27 22:29:24.871391] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.003 [2024-09-27 22:29:24.871421] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:29.003 [2024-09-27 22:29:24.871434] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.003 [2024-09-27 22:29:24.874132] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.003 [2024-09-27 22:29:24.874187] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:29.003 pt3 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.003 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.261 [2024-09-27 22:29:24.883342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt1 is claimed 00:12:29.261 [2024-09-27 22:29:24.885817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.261 [2024-09-27 22:29:24.886092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.261 [2024-09-27 22:29:24.886404] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:29.261 [2024-09-27 22:29:24.886509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.261 [2024-09-27 22:29:24.886925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:29.261 [2024-09-27 22:29:24.887267] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:29.261 [2024-09-27 22:29:24.887379] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:29.261 [2024-09-27 22:29:24.887724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.261 "name": "raid_bdev1", 00:12:29.261 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:29.261 "strip_size_kb": 0, 00:12:29.261 "state": "online", 00:12:29.261 "raid_level": "raid1", 00:12:29.261 "superblock": true, 00:12:29.261 "num_base_bdevs": 3, 00:12:29.261 "num_base_bdevs_discovered": 3, 00:12:29.261 "num_base_bdevs_operational": 3, 00:12:29.261 "base_bdevs_list": [ 00:12:29.261 { 00:12:29.261 "name": "pt1", 00:12:29.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.261 "is_configured": true, 00:12:29.261 "data_offset": 2048, 00:12:29.261 "data_size": 63488 00:12:29.261 }, 00:12:29.261 { 00:12:29.261 "name": "pt2", 00:12:29.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.261 "is_configured": true, 00:12:29.261 "data_offset": 2048, 00:12:29.261 "data_size": 63488 00:12:29.261 }, 00:12:29.261 { 00:12:29.261 "name": "pt3", 00:12:29.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.261 "is_configured": true, 00:12:29.261 "data_offset": 2048, 00:12:29.261 "data_size": 63488 00:12:29.261 } 00:12:29.261 ] 00:12:29.261 }' 
00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.261 22:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.520 [2024-09-27 22:29:25.355733] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.520 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.520 "name": "raid_bdev1", 00:12:29.520 "aliases": [ 00:12:29.520 "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52" 00:12:29.520 ], 00:12:29.520 "product_name": "Raid Volume", 00:12:29.520 "block_size": 512, 00:12:29.520 "num_blocks": 63488, 00:12:29.520 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:29.520 "assigned_rate_limits": { 00:12:29.520 "rw_ios_per_sec": 0, 00:12:29.520 "rw_mbytes_per_sec": 
0, 00:12:29.520 "r_mbytes_per_sec": 0, 00:12:29.520 "w_mbytes_per_sec": 0 00:12:29.520 }, 00:12:29.520 "claimed": false, 00:12:29.520 "zoned": false, 00:12:29.520 "supported_io_types": { 00:12:29.520 "read": true, 00:12:29.520 "write": true, 00:12:29.520 "unmap": false, 00:12:29.520 "flush": false, 00:12:29.520 "reset": true, 00:12:29.520 "nvme_admin": false, 00:12:29.520 "nvme_io": false, 00:12:29.520 "nvme_io_md": false, 00:12:29.520 "write_zeroes": true, 00:12:29.520 "zcopy": false, 00:12:29.520 "get_zone_info": false, 00:12:29.520 "zone_management": false, 00:12:29.520 "zone_append": false, 00:12:29.520 "compare": false, 00:12:29.520 "compare_and_write": false, 00:12:29.520 "abort": false, 00:12:29.520 "seek_hole": false, 00:12:29.520 "seek_data": false, 00:12:29.520 "copy": false, 00:12:29.520 "nvme_iov_md": false 00:12:29.520 }, 00:12:29.520 "memory_domains": [ 00:12:29.520 { 00:12:29.520 "dma_device_id": "system", 00:12:29.520 "dma_device_type": 1 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.520 "dma_device_type": 2 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "dma_device_id": "system", 00:12:29.520 "dma_device_type": 1 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.520 "dma_device_type": 2 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "dma_device_id": "system", 00:12:29.520 "dma_device_type": 1 00:12:29.520 }, 00:12:29.520 { 00:12:29.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.520 "dma_device_type": 2 00:12:29.520 } 00:12:29.520 ], 00:12:29.520 "driver_specific": { 00:12:29.520 "raid": { 00:12:29.520 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:29.520 "strip_size_kb": 0, 00:12:29.520 "state": "online", 00:12:29.520 "raid_level": "raid1", 00:12:29.520 "superblock": true, 00:12:29.520 "num_base_bdevs": 3, 00:12:29.520 "num_base_bdevs_discovered": 3, 00:12:29.520 "num_base_bdevs_operational": 3, 00:12:29.521 "base_bdevs_list": [ 00:12:29.521 { 
00:12:29.521 "name": "pt1", 00:12:29.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.521 "is_configured": true, 00:12:29.521 "data_offset": 2048, 00:12:29.521 "data_size": 63488 00:12:29.521 }, 00:12:29.521 { 00:12:29.521 "name": "pt2", 00:12:29.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.521 "is_configured": true, 00:12:29.521 "data_offset": 2048, 00:12:29.521 "data_size": 63488 00:12:29.521 }, 00:12:29.521 { 00:12:29.521 "name": "pt3", 00:12:29.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.521 "is_configured": true, 00:12:29.521 "data_offset": 2048, 00:12:29.521 "data_size": 63488 00:12:29.521 } 00:12:29.521 ] 00:12:29.521 } 00:12:29.521 } 00:12:29.521 }' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:29.780 pt2 00:12:29.780 pt3' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.780 22:29:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.780 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.780 [2024-09-27 22:29:25.643687] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52 ']' 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.040 [2024-09-27 22:29:25.687368] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.040 [2024-09-27 22:29:25.687531] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.040 [2024-09-27 22:29:25.687693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.040 [2024-09-27 22:29:25.687866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.040 [2024-09-27 22:29:25.688027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:30.040 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # 
rpc_cmd bdev_passthru_delete pt3 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.041 [2024-09-27 22:29:25.839415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:30.041 [2024-09-27 22:29:25.841876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:30.041 [2024-09-27 22:29:25.841940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:30.041 [2024-09-27 22:29:25.842014] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:30.041 [2024-09-27 22:29:25.842075] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:30.041 [2024-09-27 22:29:25.842098] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:30.041 [2024-09-27 22:29:25.842121] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.041 [2024-09-27 22:29:25.842133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:30.041 request: 00:12:30.041 { 00:12:30.041 "name": "raid_bdev1", 00:12:30.041 "raid_level": "raid1", 00:12:30.041 "base_bdevs": [ 00:12:30.041 "malloc1", 00:12:30.041 "malloc2", 00:12:30.041 "malloc3" 00:12:30.041 ], 00:12:30.041 "superblock": false, 00:12:30.041 "method": "bdev_raid_create", 00:12:30.041 "req_id": 1 00:12:30.041 } 00:12:30.041 Got JSON-RPC error response 00:12:30.041 response: 00:12:30.041 { 00:12:30.041 "code": -17, 00:12:30.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:12:30.041 } 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.041 [2024-09-27 22:29:25.907398] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.041 [2024-09-27 22:29:25.907628] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.041 [2024-09-27 22:29:25.907694] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:30.041 
[2024-09-27 22:29:25.907774] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.041 [2024-09-27 22:29:25.910613] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.041 [2024-09-27 22:29:25.910660] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.041 [2024-09-27 22:29:25.910766] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:30.041 [2024-09-27 22:29:25.910831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:30.041 pt1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.041 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.300 "name": "raid_bdev1", 00:12:30.300 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:30.300 "strip_size_kb": 0, 00:12:30.300 "state": "configuring", 00:12:30.300 "raid_level": "raid1", 00:12:30.300 "superblock": true, 00:12:30.300 "num_base_bdevs": 3, 00:12:30.300 "num_base_bdevs_discovered": 1, 00:12:30.300 "num_base_bdevs_operational": 3, 00:12:30.300 "base_bdevs_list": [ 00:12:30.300 { 00:12:30.300 "name": "pt1", 00:12:30.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.300 "is_configured": true, 00:12:30.300 "data_offset": 2048, 00:12:30.300 "data_size": 63488 00:12:30.300 }, 00:12:30.300 { 00:12:30.300 "name": null, 00:12:30.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.300 "is_configured": false, 00:12:30.300 "data_offset": 2048, 00:12:30.300 "data_size": 63488 00:12:30.300 }, 00:12:30.300 { 00:12:30.300 "name": null, 00:12:30.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.300 "is_configured": false, 00:12:30.300 "data_offset": 2048, 00:12:30.300 "data_size": 63488 00:12:30.300 } 00:12:30.300 ] 00:12:30.300 }' 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.300 22:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.559 [2024-09-27 22:29:26.363428] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.559 [2024-09-27 22:29:26.363510] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.559 [2024-09-27 22:29:26.363538] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:30.559 [2024-09-27 22:29:26.363568] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.559 [2024-09-27 22:29:26.364078] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.559 [2024-09-27 22:29:26.364101] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.559 [2024-09-27 22:29:26.364198] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.559 [2024-09-27 22:29:26.364223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.559 pt2 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.559 [2024-09-27 22:29:26.375455] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 
00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.559 "name": "raid_bdev1", 00:12:30.559 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:30.559 "strip_size_kb": 0, 00:12:30.559 "state": "configuring", 00:12:30.559 "raid_level": "raid1", 00:12:30.559 "superblock": true, 00:12:30.559 "num_base_bdevs": 3, 00:12:30.559 "num_base_bdevs_discovered": 1, 00:12:30.559 "num_base_bdevs_operational": 3, 00:12:30.559 
"base_bdevs_list": [ 00:12:30.559 { 00:12:30.559 "name": "pt1", 00:12:30.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:30.559 "is_configured": true, 00:12:30.559 "data_offset": 2048, 00:12:30.559 "data_size": 63488 00:12:30.559 }, 00:12:30.559 { 00:12:30.559 "name": null, 00:12:30.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.559 "is_configured": false, 00:12:30.559 "data_offset": 0, 00:12:30.559 "data_size": 63488 00:12:30.559 }, 00:12:30.559 { 00:12:30.559 "name": null, 00:12:30.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.559 "is_configured": false, 00:12:30.559 "data_offset": 2048, 00:12:30.559 "data_size": 63488 00:12:30.559 } 00:12:30.559 ] 00:12:30.559 }' 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.559 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.127 [2024-09-27 22:29:26.819399] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.127 [2024-09-27 22:29:26.819692] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.127 [2024-09-27 22:29:26.819754] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:31.127 [2024-09-27 22:29:26.819848] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.127 [2024-09-27 
22:29:26.820398] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.127 [2024-09-27 22:29:26.820434] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.127 [2024-09-27 22:29:26.820539] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:31.127 [2024-09-27 22:29:26.820576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.127 pt2 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.127 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.127 [2024-09-27 22:29:26.831415] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.128 [2024-09-27 22:29:26.831488] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.128 [2024-09-27 22:29:26.831511] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:31.128 [2024-09-27 22:29:26.831526] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.128 [2024-09-27 22:29:26.832023] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.128 [2024-09-27 22:29:26.832054] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.128 [2024-09-27 22:29:26.832139] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on 
bdev pt3 00:12:31.128 [2024-09-27 22:29:26.832173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.128 [2024-09-27 22:29:26.832345] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:31.128 [2024-09-27 22:29:26.832361] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.128 [2024-09-27 22:29:26.832626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.128 [2024-09-27 22:29:26.832815] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:31.128 [2024-09-27 22:29:26.832826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:31.128 [2024-09-27 22:29:26.832975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.128 pt3 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.128 "name": "raid_bdev1", 00:12:31.128 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:31.128 "strip_size_kb": 0, 00:12:31.128 "state": "online", 00:12:31.128 "raid_level": "raid1", 00:12:31.128 "superblock": true, 00:12:31.128 "num_base_bdevs": 3, 00:12:31.128 "num_base_bdevs_discovered": 3, 00:12:31.128 "num_base_bdevs_operational": 3, 00:12:31.128 "base_bdevs_list": [ 00:12:31.128 { 00:12:31.128 "name": "pt1", 00:12:31.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.128 "is_configured": true, 00:12:31.128 "data_offset": 2048, 00:12:31.128 "data_size": 63488 00:12:31.128 }, 00:12:31.128 { 00:12:31.128 "name": "pt2", 00:12:31.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.128 "is_configured": true, 00:12:31.128 "data_offset": 2048, 00:12:31.128 "data_size": 63488 00:12:31.128 }, 00:12:31.128 { 00:12:31.128 "name": "pt3", 00:12:31.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.128 "is_configured": true, 00:12:31.128 "data_offset": 2048, 
00:12:31.128 "data_size": 63488 00:12:31.128 } 00:12:31.128 ] 00:12:31.128 }' 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.128 22:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.695 [2024-09-27 22:29:27.307714] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.695 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.695 "name": "raid_bdev1", 00:12:31.695 "aliases": [ 00:12:31.695 "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52" 00:12:31.695 ], 00:12:31.695 "product_name": "Raid Volume", 00:12:31.695 "block_size": 512, 00:12:31.695 "num_blocks": 63488, 00:12:31.695 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:31.695 
"assigned_rate_limits": { 00:12:31.695 "rw_ios_per_sec": 0, 00:12:31.695 "rw_mbytes_per_sec": 0, 00:12:31.695 "r_mbytes_per_sec": 0, 00:12:31.695 "w_mbytes_per_sec": 0 00:12:31.695 }, 00:12:31.695 "claimed": false, 00:12:31.695 "zoned": false, 00:12:31.695 "supported_io_types": { 00:12:31.696 "read": true, 00:12:31.696 "write": true, 00:12:31.696 "unmap": false, 00:12:31.696 "flush": false, 00:12:31.696 "reset": true, 00:12:31.696 "nvme_admin": false, 00:12:31.696 "nvme_io": false, 00:12:31.696 "nvme_io_md": false, 00:12:31.696 "write_zeroes": true, 00:12:31.696 "zcopy": false, 00:12:31.696 "get_zone_info": false, 00:12:31.696 "zone_management": false, 00:12:31.696 "zone_append": false, 00:12:31.696 "compare": false, 00:12:31.696 "compare_and_write": false, 00:12:31.696 "abort": false, 00:12:31.696 "seek_hole": false, 00:12:31.696 "seek_data": false, 00:12:31.696 "copy": false, 00:12:31.696 "nvme_iov_md": false 00:12:31.696 }, 00:12:31.696 "memory_domains": [ 00:12:31.696 { 00:12:31.696 "dma_device_id": "system", 00:12:31.696 "dma_device_type": 1 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.696 "dma_device_type": 2 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "dma_device_id": "system", 00:12:31.696 "dma_device_type": 1 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.696 "dma_device_type": 2 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "dma_device_id": "system", 00:12:31.696 "dma_device_type": 1 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.696 "dma_device_type": 2 00:12:31.696 } 00:12:31.696 ], 00:12:31.696 "driver_specific": { 00:12:31.696 "raid": { 00:12:31.696 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:31.696 "strip_size_kb": 0, 00:12:31.696 "state": "online", 00:12:31.696 "raid_level": "raid1", 00:12:31.696 "superblock": true, 00:12:31.696 "num_base_bdevs": 3, 00:12:31.696 "num_base_bdevs_discovered": 3, 
00:12:31.696 "num_base_bdevs_operational": 3, 00:12:31.696 "base_bdevs_list": [ 00:12:31.696 { 00:12:31.696 "name": "pt1", 00:12:31.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.696 "is_configured": true, 00:12:31.696 "data_offset": 2048, 00:12:31.696 "data_size": 63488 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "name": "pt2", 00:12:31.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.696 "is_configured": true, 00:12:31.696 "data_offset": 2048, 00:12:31.696 "data_size": 63488 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "name": "pt3", 00:12:31.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.696 "is_configured": true, 00:12:31.696 "data_offset": 2048, 00:12:31.696 "data_size": 63488 00:12:31.696 } 00:12:31.696 ] 00:12:31.696 } 00:12:31.696 } 00:12:31.696 }' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:31.696 pt2 00:12:31.696 pt3' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.696 22:29:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.696 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.955 [2024-09-27 22:29:27.591695] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52 '!=' bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52 ']' 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.955 [2024-09-27 22:29:27.635459] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.955 "name": "raid_bdev1", 00:12:31.955 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:31.955 "strip_size_kb": 0, 00:12:31.955 "state": "online", 00:12:31.955 "raid_level": "raid1", 00:12:31.955 "superblock": true, 00:12:31.955 "num_base_bdevs": 3, 00:12:31.955 "num_base_bdevs_discovered": 2, 00:12:31.955 "num_base_bdevs_operational": 2, 00:12:31.955 "base_bdevs_list": [ 00:12:31.955 { 00:12:31.955 "name": null, 00:12:31.955 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:31.955 "is_configured": false, 00:12:31.955 "data_offset": 0, 00:12:31.955 "data_size": 63488 00:12:31.955 }, 00:12:31.955 { 00:12:31.955 "name": "pt2", 00:12:31.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.955 "is_configured": true, 00:12:31.955 "data_offset": 2048, 00:12:31.955 "data_size": 63488 00:12:31.955 }, 00:12:31.955 { 00:12:31.955 "name": "pt3", 00:12:31.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.955 "is_configured": true, 00:12:31.955 "data_offset": 2048, 00:12:31.955 "data_size": 63488 00:12:31.955 } 00:12:31.955 ] 00:12:31.955 }' 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.955 22:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.214 [2024-09-27 22:29:28.079383] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.214 [2024-09-27 22:29:28.079421] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.214 [2024-09-27 22:29:28.079502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.214 [2024-09-27 22:29:28.079565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.214 [2024-09-27 22:29:28.079584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:32.214 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.472 [2024-09-27 22:29:28.159389] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:32.472 [2024-09-27 22:29:28.159467] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.472 [2024-09-27 22:29:28.159488] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:32.472 [2024-09-27 22:29:28.159503] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.472 [2024-09-27 22:29:28.162224] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.472 [2024-09-27 22:29:28.162274] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:32.472 [2024-09-27 22:29:28.162365] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:32.472 [2024-09-27 22:29:28.162426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:32.472 pt2 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.472 22:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.472 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.472 "name": "raid_bdev1", 00:12:32.472 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:32.472 "strip_size_kb": 0, 00:12:32.472 "state": "configuring", 00:12:32.472 "raid_level": "raid1", 00:12:32.472 "superblock": true, 00:12:32.472 "num_base_bdevs": 3, 00:12:32.472 "num_base_bdevs_discovered": 1, 00:12:32.472 "num_base_bdevs_operational": 2, 00:12:32.472 "base_bdevs_list": [ 00:12:32.472 { 00:12:32.472 "name": null, 00:12:32.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.472 
"is_configured": false, 00:12:32.472 "data_offset": 2048, 00:12:32.472 "data_size": 63488 00:12:32.472 }, 00:12:32.472 { 00:12:32.472 "name": "pt2", 00:12:32.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.472 "is_configured": true, 00:12:32.472 "data_offset": 2048, 00:12:32.472 "data_size": 63488 00:12:32.472 }, 00:12:32.472 { 00:12:32.473 "name": null, 00:12:32.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.473 "is_configured": false, 00:12:32.473 "data_offset": 2048, 00:12:32.473 "data_size": 63488 00:12:32.473 } 00:12:32.473 ] 00:12:32.473 }' 00:12:32.473 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.473 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.047 [2024-09-27 22:29:28.615389] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:33.047 [2024-09-27 22:29:28.615471] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.047 [2024-09-27 22:29:28.615495] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:33.047 [2024-09-27 22:29:28.615510] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.047 [2024-09-27 22:29:28.616015] vbdev_passthru.c: 
790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.047 [2024-09-27 22:29:28.616039] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:33.047 [2024-09-27 22:29:28.616127] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:33.047 [2024-09-27 22:29:28.616161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:33.047 [2024-09-27 22:29:28.616275] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:33.047 [2024-09-27 22:29:28.616288] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.047 [2024-09-27 22:29:28.616561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:33.047 [2024-09-27 22:29:28.616726] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:33.047 [2024-09-27 22:29:28.616737] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:33.047 [2024-09-27 22:29:28.616905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.047 pt3 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.047 "name": "raid_bdev1", 00:12:33.047 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:33.047 "strip_size_kb": 0, 00:12:33.047 "state": "online", 00:12:33.047 "raid_level": "raid1", 00:12:33.047 "superblock": true, 00:12:33.047 "num_base_bdevs": 3, 00:12:33.047 "num_base_bdevs_discovered": 2, 00:12:33.047 "num_base_bdevs_operational": 2, 00:12:33.047 "base_bdevs_list": [ 00:12:33.047 { 00:12:33.047 "name": null, 00:12:33.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.047 "is_configured": false, 00:12:33.047 "data_offset": 2048, 00:12:33.047 "data_size": 63488 00:12:33.047 }, 00:12:33.047 { 00:12:33.047 "name": "pt2", 00:12:33.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.047 "is_configured": true, 00:12:33.047 "data_offset": 2048, 00:12:33.047 "data_size": 63488 00:12:33.047 }, 00:12:33.047 { 00:12:33.047 "name": "pt3", 00:12:33.047 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:12:33.047 "is_configured": true, 00:12:33.047 "data_offset": 2048, 00:12:33.047 "data_size": 63488 00:12:33.047 } 00:12:33.047 ] 00:12:33.047 }' 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.047 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.316 22:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:33.316 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.316 22:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.316 [2024-09-27 22:29:28.999360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.316 [2024-09-27 22:29:28.999404] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.316 [2024-09-27 22:29:28.999486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.316 [2024-09-27 22:29:28.999554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.316 [2024-09-27 22:29:28.999567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:33.316 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.316 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 [2024-09-27 22:29:29.071432] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:33.317 [2024-09-27 22:29:29.071517] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.317 [2024-09-27 22:29:29.071543] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:33.317 [2024-09-27 22:29:29.071555] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.317 [2024-09-27 22:29:29.074304] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.317 [2024-09-27 22:29:29.074350] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:33.317 [2024-09-27 22:29:29.074454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:12:33.317 [2024-09-27 22:29:29.074506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:33.317 [2024-09-27 22:29:29.074633] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:33.317 [2024-09-27 22:29:29.074648] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.317 [2024-09-27 22:29:29.074670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:33.317 [2024-09-27 22:29:29.074729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:33.317 pt1 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.317 22:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.317 "name": "raid_bdev1", 00:12:33.317 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:33.317 "strip_size_kb": 0, 00:12:33.317 "state": "configuring", 00:12:33.317 "raid_level": "raid1", 00:12:33.317 "superblock": true, 00:12:33.317 "num_base_bdevs": 3, 00:12:33.317 "num_base_bdevs_discovered": 1, 00:12:33.317 "num_base_bdevs_operational": 2, 00:12:33.317 "base_bdevs_list": [ 00:12:33.317 { 00:12:33.317 "name": null, 00:12:33.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.317 "is_configured": false, 00:12:33.317 "data_offset": 2048, 00:12:33.317 "data_size": 63488 00:12:33.317 }, 00:12:33.317 { 00:12:33.317 "name": "pt2", 00:12:33.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.317 "is_configured": true, 00:12:33.317 "data_offset": 2048, 00:12:33.317 "data_size": 63488 00:12:33.317 }, 00:12:33.317 { 00:12:33.317 "name": null, 00:12:33.317 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.317 "is_configured": false, 00:12:33.317 "data_offset": 2048, 00:12:33.317 "data_size": 63488 00:12:33.317 } 00:12:33.317 ] 00:12:33.317 }' 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.317 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.883 [2024-09-27 22:29:29.587420] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:33.883 [2024-09-27 22:29:29.587501] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.883 [2024-09-27 22:29:29.587528] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:33.883 [2024-09-27 22:29:29.587540] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.883 [2024-09-27 22:29:29.588055] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.883 [2024-09-27 22:29:29.588078] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:33.883 [2024-09-27 22:29:29.588173] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:33.883 [2024-09-27 22:29:29.588225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:12:33.883 [2024-09-27 22:29:29.588360] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:33.883 [2024-09-27 22:29:29.588371] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.883 [2024-09-27 22:29:29.588669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:33.883 [2024-09-27 22:29:29.588827] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:33.883 [2024-09-27 22:29:29.588844] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:33.883 [2024-09-27 22:29:29.589015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.883 pt3 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.883 "name": "raid_bdev1", 00:12:33.883 "uuid": "bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52", 00:12:33.883 "strip_size_kb": 0, 00:12:33.883 "state": "online", 00:12:33.883 "raid_level": "raid1", 00:12:33.883 "superblock": true, 00:12:33.883 "num_base_bdevs": 3, 00:12:33.883 "num_base_bdevs_discovered": 2, 00:12:33.883 "num_base_bdevs_operational": 2, 00:12:33.883 "base_bdevs_list": [ 00:12:33.883 { 00:12:33.883 "name": null, 00:12:33.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.883 "is_configured": false, 00:12:33.883 "data_offset": 2048, 00:12:33.883 "data_size": 63488 00:12:33.883 }, 00:12:33.883 { 00:12:33.883 "name": "pt2", 00:12:33.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.883 "is_configured": true, 00:12:33.883 "data_offset": 2048, 00:12:33.883 "data_size": 63488 00:12:33.883 }, 00:12:33.883 { 00:12:33.883 "name": "pt3", 00:12:33.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.883 "is_configured": true, 00:12:33.883 "data_offset": 2048, 00:12:33.883 "data_size": 63488 00:12:33.883 } 00:12:33.883 ] 00:12:33.883 }' 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.883 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.142 22:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:34.142 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.142 22:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:34.142 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.142 22:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.142 22:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:34.142 22:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.142 22:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:34.142 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.142 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.400 [2024-09-27 22:29:30.019702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52 '!=' bbd3362b-bf5c-4a8d-b21b-68fb9c6eeb52 ']' 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69253 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 69253 ']' 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 69253 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.400 
22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69253 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.400 killing process with pid 69253 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69253' 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 69253 00:12:34.400 22:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 69253 00:12:34.400 [2024-09-27 22:29:30.097126] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.400 [2024-09-27 22:29:30.097229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.400 [2024-09-27 22:29:30.097297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.400 [2024-09-27 22:29:30.097312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:34.659 [2024-09-27 22:29:30.430349] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.191 22:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:37.191 00:12:37.191 real 0m9.191s 00:12:37.191 user 0m13.599s 00:12:37.191 sys 0m1.759s 00:12:37.191 22:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.191 ************************************ 00:12:37.191 END TEST raid_superblock_test 00:12:37.191 22:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.191 ************************************ 00:12:37.191 22:29:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:12:37.191 22:29:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:37.191 22:29:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.191 22:29:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.191 ************************************ 00:12:37.191 START TEST raid_read_error_test 00:12:37.191 ************************************ 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.191 22:29:32 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V95tKqYGYs 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69715 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69715 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69715 ']' 00:12:37.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.192 22:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.192 [2024-09-27 22:29:32.739116] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:12:37.192 [2024-09-27 22:29:32.739499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69715 ] 00:12:37.192 [2024-09-27 22:29:32.900145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.451 [2024-09-27 22:29:33.182947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.710 [2024-09-27 22:29:33.442736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.710 [2024-09-27 22:29:33.442786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.277 22:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.277 22:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:38.277 22:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.277 22:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.277 22:29:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 BaseBdev1_malloc 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 true 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 [2024-09-27 22:29:34.023348] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.277 [2024-09-27 22:29:34.023600] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.277 [2024-09-27 22:29:34.023639] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.277 [2024-09-27 22:29:34.023657] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.277 [2024-09-27 22:29:34.026712] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.277 BaseBdev1 00:12:38.277 [2024-09-27 22:29:34.026916] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.277 
22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 BaseBdev2_malloc 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 true 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 [2024-09-27 22:29:34.099593] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.277 [2024-09-27 22:29:34.099833] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.277 [2024-09-27 22:29:34.099895] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.277 [2024-09-27 22:29:34.099996] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.277 [2024-09-27 22:29:34.102689] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:38.277 BaseBdev2 00:12:38.277 [2024-09-27 22:29:34.102867] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.277 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 BaseBdev3_malloc 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 true 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 [2024-09-27 22:29:34.176774] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.536 [2024-09-27 22:29:34.177007] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.536 [2024-09-27 22:29:34.177069] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000009380 00:12:38.536 [2024-09-27 22:29:34.177088] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.536 [2024-09-27 22:29:34.179694] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.536 [2024-09-27 22:29:34.179746] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.536 BaseBdev3 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 [2024-09-27 22:29:34.188849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.536 [2024-09-27 22:29:34.191179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.536 [2024-09-27 22:29:34.191330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.536 [2024-09-27 22:29:34.191676] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:38.536 [2024-09-27 22:29:34.191779] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.536 [2024-09-27 22:29:34.192143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:38.536 [2024-09-27 22:29:34.192437] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:38.536 [2024-09-27 22:29:34.192548] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:38.536 [2024-09-27 22:29:34.192882] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.536 "name": "raid_bdev1", 00:12:38.536 "uuid": 
"506529e3-8017-4aba-89a6-ae87a198f69d", 00:12:38.536 "strip_size_kb": 0, 00:12:38.536 "state": "online", 00:12:38.536 "raid_level": "raid1", 00:12:38.536 "superblock": true, 00:12:38.536 "num_base_bdevs": 3, 00:12:38.536 "num_base_bdevs_discovered": 3, 00:12:38.536 "num_base_bdevs_operational": 3, 00:12:38.536 "base_bdevs_list": [ 00:12:38.536 { 00:12:38.536 "name": "BaseBdev1", 00:12:38.536 "uuid": "891a6e97-02fa-5770-b88a-39af2f0ff67e", 00:12:38.536 "is_configured": true, 00:12:38.536 "data_offset": 2048, 00:12:38.536 "data_size": 63488 00:12:38.536 }, 00:12:38.536 { 00:12:38.536 "name": "BaseBdev2", 00:12:38.536 "uuid": "50ee2d6c-89ae-5880-800d-7e2e41e5a7e6", 00:12:38.536 "is_configured": true, 00:12:38.536 "data_offset": 2048, 00:12:38.536 "data_size": 63488 00:12:38.536 }, 00:12:38.536 { 00:12:38.536 "name": "BaseBdev3", 00:12:38.536 "uuid": "3a4df66d-02ac-59b1-8c9e-ad1c9eb0d755", 00:12:38.536 "is_configured": true, 00:12:38.536 "data_offset": 2048, 00:12:38.536 "data_size": 63488 00:12:38.536 } 00:12:38.536 ] 00:12:38.536 }' 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.536 22:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.149 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.149 22:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.149 [2024-09-27 22:29:34.785658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.087 22:29:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.087 22:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.088 22:29:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.088 22:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.088 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.088 "name": "raid_bdev1", 00:12:40.088 "uuid": "506529e3-8017-4aba-89a6-ae87a198f69d", 00:12:40.088 "strip_size_kb": 0, 00:12:40.088 "state": "online", 00:12:40.088 "raid_level": "raid1", 00:12:40.088 "superblock": true, 00:12:40.088 "num_base_bdevs": 3, 00:12:40.088 "num_base_bdevs_discovered": 3, 00:12:40.088 "num_base_bdevs_operational": 3, 00:12:40.088 "base_bdevs_list": [ 00:12:40.088 { 00:12:40.088 "name": "BaseBdev1", 00:12:40.088 "uuid": "891a6e97-02fa-5770-b88a-39af2f0ff67e", 00:12:40.088 "is_configured": true, 00:12:40.088 "data_offset": 2048, 00:12:40.088 "data_size": 63488 00:12:40.088 }, 00:12:40.088 { 00:12:40.088 "name": "BaseBdev2", 00:12:40.088 "uuid": "50ee2d6c-89ae-5880-800d-7e2e41e5a7e6", 00:12:40.088 "is_configured": true, 00:12:40.088 "data_offset": 2048, 00:12:40.088 "data_size": 63488 00:12:40.088 }, 00:12:40.088 { 00:12:40.088 "name": "BaseBdev3", 00:12:40.088 "uuid": "3a4df66d-02ac-59b1-8c9e-ad1c9eb0d755", 00:12:40.088 "is_configured": true, 00:12:40.088 "data_offset": 2048, 00:12:40.088 "data_size": 63488 00:12:40.088 } 00:12:40.088 ] 00:12:40.088 }' 00:12:40.088 22:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.088 22:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.346 22:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.346 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.346 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.346 [2024-09-27 22:29:36.170386] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:12:40.346 [2024-09-27 22:29:36.170424] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.346 [2024-09-27 22:29:36.173415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.346 [2024-09-27 22:29:36.173575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.346 [2024-09-27 22:29:36.173724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.346 [2024-09-27 22:29:36.173881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:40.346 { 00:12:40.346 "results": [ 00:12:40.346 { 00:12:40.346 "job": "raid_bdev1", 00:12:40.346 "core_mask": "0x1", 00:12:40.346 "workload": "randrw", 00:12:40.346 "percentage": 50, 00:12:40.347 "status": "finished", 00:12:40.347 "queue_depth": 1, 00:12:40.347 "io_size": 131072, 00:12:40.347 "runtime": 1.384417, 00:12:40.347 "iops": 12304.818562615166, 00:12:40.347 "mibps": 1538.1023203268958, 00:12:40.347 "io_failed": 0, 00:12:40.347 "io_timeout": 0, 00:12:40.347 "avg_latency_us": 78.22921511699867, 00:12:40.347 "min_latency_us": 24.674698795180724, 00:12:40.347 "max_latency_us": 1546.2811244979919 00:12:40.347 } 00:12:40.347 ], 00:12:40.347 "core_count": 1 00:12:40.347 } 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69715 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69715 ']' 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69715 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:12:40.347 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69715 00:12:40.606 killing process with pid 69715 00:12:40.606 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.606 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.606 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69715' 00:12:40.606 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69715 00:12:40.606 [2024-09-27 22:29:36.240470] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.606 22:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69715 00:12:40.866 [2024-09-27 22:29:36.494836] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V95tKqYGYs 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:43.405 00:12:43.405 real 0m6.099s 00:12:43.405 user 0m6.952s 00:12:43.405 sys 0m0.743s 00:12:43.405 22:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.405 22:29:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 ************************************ 00:12:43.405 END TEST raid_read_error_test 00:12:43.405 ************************************ 00:12:43.405 22:29:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:43.405 22:29:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:43.405 22:29:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.405 22:29:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 ************************************ 00:12:43.405 START TEST raid_write_error_test 00:12:43.405 ************************************ 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs 
)) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pW6eATVAKj 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69872 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69872 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' 
-z 69872 ']' 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.405 22:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.405 [2024-09-27 22:29:38.924811] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:12:43.405 [2024-09-27 22:29:38.924956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69872 ] 00:12:43.405 [2024-09-27 22:29:39.100518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.665 [2024-09-27 22:29:39.352729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.925 [2024-09-27 22:29:39.613107] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.925 [2024-09-27 22:29:39.613149] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.493 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.493 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:44.493 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.493 22:29:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 BaseBdev1_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 true 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 [2024-09-27 22:29:40.187984] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.494 [2024-09-27 22:29:40.188240] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.494 [2024-09-27 22:29:40.188275] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:44.494 [2024-09-27 22:29:40.188292] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.494 [2024-09-27 22:29:40.191026] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.494 [2024-09-27 22:29:40.191073] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:44.494 BaseBdev1 
00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 BaseBdev2_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 true 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 [2024-09-27 22:29:40.265124] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:44.494 [2024-09-27 22:29:40.265335] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.494 [2024-09-27 22:29:40.265368] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:44.494 [2024-09-27 22:29:40.265384] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:12:44.494 [2024-09-27 22:29:40.268081] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.494 [2024-09-27 22:29:40.268129] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:44.494 BaseBdev2 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 BaseBdev3_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 true 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 [2024-09-27 22:29:40.342033] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:44.494 [2024-09-27 22:29:40.342285] vbdev_passthru.c: 
715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.494 [2024-09-27 22:29:40.342321] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:44.494 [2024-09-27 22:29:40.342354] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.494 [2024-09-27 22:29:40.345163] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.494 [2024-09-27 22:29:40.345219] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:44.494 BaseBdev3 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.494 [2024-09-27 22:29:40.354207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.494 [2024-09-27 22:29:40.356780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.494 [2024-09-27 22:29:40.357012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.494 [2024-09-27 22:29:40.357345] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:44.494 [2024-09-27 22:29:40.357366] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.494 [2024-09-27 22:29:40.357701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:44.494 [2024-09-27 22:29:40.357894] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:44.494 [2024-09-27 22:29:40.357910] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:44.494 [2024-09-27 22:29:40.358189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.494 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.754 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:44.754 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.754 "name": "raid_bdev1", 00:12:44.754 "uuid": "6280bd74-ccfc-4550-b3ed-9bf179f53650", 00:12:44.754 "strip_size_kb": 0, 00:12:44.754 "state": "online", 00:12:44.754 "raid_level": "raid1", 00:12:44.754 "superblock": true, 00:12:44.754 "num_base_bdevs": 3, 00:12:44.754 "num_base_bdevs_discovered": 3, 00:12:44.754 "num_base_bdevs_operational": 3, 00:12:44.754 "base_bdevs_list": [ 00:12:44.754 { 00:12:44.754 "name": "BaseBdev1", 00:12:44.754 "uuid": "2684b809-e045-5655-9218-f9d9ce98239a", 00:12:44.754 "is_configured": true, 00:12:44.754 "data_offset": 2048, 00:12:44.754 "data_size": 63488 00:12:44.754 }, 00:12:44.754 { 00:12:44.754 "name": "BaseBdev2", 00:12:44.754 "uuid": "a71b372c-9153-5c1b-b545-f1e69b5614f5", 00:12:44.754 "is_configured": true, 00:12:44.754 "data_offset": 2048, 00:12:44.754 "data_size": 63488 00:12:44.754 }, 00:12:44.754 { 00:12:44.754 "name": "BaseBdev3", 00:12:44.754 "uuid": "4bef87a0-27d2-5b7e-9ffc-66b147f2d219", 00:12:44.754 "is_configured": true, 00:12:44.754 "data_offset": 2048, 00:12:44.754 "data_size": 63488 00:12:44.754 } 00:12:44.754 ] 00:12:44.754 }' 00:12:44.754 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.754 22:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.014 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.014 22:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:45.274 [2024-09-27 22:29:40.947080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.210 [2024-09-27 22:29:41.835038] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:46.210 [2024-09-27 22:29:41.835097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.210 [2024-09-27 22:29:41.835336] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.210 "name": "raid_bdev1", 00:12:46.210 "uuid": "6280bd74-ccfc-4550-b3ed-9bf179f53650", 00:12:46.210 "strip_size_kb": 0, 00:12:46.210 "state": "online", 00:12:46.210 "raid_level": "raid1", 00:12:46.210 "superblock": true, 00:12:46.210 "num_base_bdevs": 3, 00:12:46.210 "num_base_bdevs_discovered": 2, 00:12:46.210 "num_base_bdevs_operational": 2, 00:12:46.210 "base_bdevs_list": [ 00:12:46.210 { 00:12:46.210 "name": null, 00:12:46.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.210 "is_configured": false, 00:12:46.210 "data_offset": 0, 00:12:46.210 "data_size": 63488 00:12:46.210 }, 00:12:46.210 { 00:12:46.210 "name": "BaseBdev2", 00:12:46.210 "uuid": "a71b372c-9153-5c1b-b545-f1e69b5614f5", 00:12:46.210 "is_configured": true, 00:12:46.210 "data_offset": 2048, 00:12:46.210 "data_size": 63488 00:12:46.210 }, 00:12:46.210 { 00:12:46.210 "name": "BaseBdev3", 00:12:46.210 "uuid": "4bef87a0-27d2-5b7e-9ffc-66b147f2d219", 00:12:46.210 "is_configured": true, 00:12:46.210 "data_offset": 2048, 00:12:46.210 "data_size": 63488 00:12:46.210 } 00:12:46.210 ] 00:12:46.210 }' 00:12:46.210 22:29:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.210 22:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.468 22:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.468 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.468 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.468 [2024-09-27 22:29:42.298178] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.468 [2024-09-27 22:29:42.298414] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.468 { 00:12:46.468 "results": [ 00:12:46.468 { 00:12:46.468 "job": "raid_bdev1", 00:12:46.468 "core_mask": "0x1", 00:12:46.468 "workload": "randrw", 00:12:46.468 "percentage": 50, 00:12:46.468 "status": "finished", 00:12:46.468 "queue_depth": 1, 00:12:46.468 "io_size": 131072, 00:12:46.468 "runtime": 1.350962, 00:12:46.468 "iops": 13496.308556421276, 00:12:46.468 "mibps": 1687.0385695526595, 00:12:46.468 "io_failed": 0, 00:12:46.468 "io_timeout": 0, 00:12:46.468 "avg_latency_us": 71.04747035088195, 00:12:46.468 "min_latency_us": 26.730923694779115, 00:12:46.468 "max_latency_us": 1559.4409638554216 00:12:46.468 } 00:12:46.468 ], 00:12:46.468 "core_count": 1 00:12:46.468 } 00:12:46.468 [2024-09-27 22:29:42.301250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.468 [2024-09-27 22:29:42.301302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.468 [2024-09-27 22:29:42.301386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.468 [2024-09-27 22:29:42.301403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:46.469 22:29:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69872 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69872 ']' 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69872 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69872 00:12:46.469 killing process with pid 69872 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69872' 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69872 00:12:46.469 [2024-09-27 22:29:42.344367] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.469 22:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69872 00:12:46.727 [2024-09-27 22:29:42.586671] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pW6eATVAKj 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:49.295 
22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:49.295 ************************************ 00:12:49.295 END TEST raid_write_error_test 00:12:49.295 ************************************ 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:49.295 00:12:49.295 real 0m6.010s 00:12:49.295 user 0m6.832s 00:12:49.295 sys 0m0.768s 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.295 22:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 22:29:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:49.295 22:29:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:49.295 22:29:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:49.295 22:29:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:49.295 22:29:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.295 22:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 ************************************ 00:12:49.295 START TEST raid_state_function_test 00:12:49.295 ************************************ 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 
00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:49.295 Process raid pid: 70032 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70032 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70032' 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70032 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 70032 ']' 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.295 22:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 [2024-09-27 22:29:45.006828] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:12:49.295 [2024-09-27 22:29:45.006989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.554 [2024-09-27 22:29:45.181563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.812 [2024-09-27 22:29:45.435069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.071 [2024-09-27 22:29:45.699115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.071 [2024-09-27 22:29:45.699418] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.641 [2024-09-27 22:29:46.218196] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.641 [2024-09-27 22:29:46.218269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.641 [2024-09-27 22:29:46.218281] 
bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.641 [2024-09-27 22:29:46.218296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.641 [2024-09-27 22:29:46.218304] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.641 [2024-09-27 22:29:46.218319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.641 [2024-09-27 22:29:46.218328] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:50.641 [2024-09-27 22:29:46.218340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.641 "name": "Existed_Raid", 00:12:50.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.641 "strip_size_kb": 64, 00:12:50.641 "state": "configuring", 00:12:50.641 "raid_level": "raid0", 00:12:50.641 "superblock": false, 00:12:50.641 "num_base_bdevs": 4, 00:12:50.641 "num_base_bdevs_discovered": 0, 00:12:50.641 "num_base_bdevs_operational": 4, 00:12:50.641 "base_bdevs_list": [ 00:12:50.641 { 00:12:50.641 "name": "BaseBdev1", 00:12:50.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.641 "is_configured": false, 00:12:50.641 "data_offset": 0, 00:12:50.641 "data_size": 0 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "name": "BaseBdev2", 00:12:50.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.641 "is_configured": false, 00:12:50.641 "data_offset": 0, 00:12:50.641 "data_size": 0 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "name": "BaseBdev3", 00:12:50.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.641 "is_configured": false, 00:12:50.641 "data_offset": 0, 00:12:50.641 "data_size": 0 00:12:50.641 }, 00:12:50.641 { 00:12:50.641 "name": "BaseBdev4", 00:12:50.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.641 "is_configured": false, 00:12:50.641 "data_offset": 0, 00:12:50.641 "data_size": 0 00:12:50.641 } 00:12:50.641 ] 00:12:50.641 
}' 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.641 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.900 [2024-09-27 22:29:46.689467] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.900 [2024-09-27 22:29:46.689691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.900 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.900 [2024-09-27 22:29:46.701477] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.900 [2024-09-27 22:29:46.701666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.900 [2024-09-27 22:29:46.701750] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.900 [2024-09-27 22:29:46.701796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.900 [2024-09-27 22:29:46.701827] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.900 
[2024-09-27 22:29:46.701861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.901 [2024-09-27 22:29:46.701890] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:50.901 [2024-09-27 22:29:46.701923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.901 [2024-09-27 22:29:46.759104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.901 BaseBdev1 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.901 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.160 [ 00:12:51.160 { 00:12:51.160 "name": "BaseBdev1", 00:12:51.160 "aliases": [ 00:12:51.160 "da660b01-40e3-4a5d-ada0-d33a418982eb" 00:12:51.160 ], 00:12:51.160 "product_name": "Malloc disk", 00:12:51.160 "block_size": 512, 00:12:51.160 "num_blocks": 65536, 00:12:51.160 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:51.160 "assigned_rate_limits": { 00:12:51.160 "rw_ios_per_sec": 0, 00:12:51.160 "rw_mbytes_per_sec": 0, 00:12:51.160 "r_mbytes_per_sec": 0, 00:12:51.160 "w_mbytes_per_sec": 0 00:12:51.160 }, 00:12:51.160 "claimed": true, 00:12:51.160 "claim_type": "exclusive_write", 00:12:51.160 "zoned": false, 00:12:51.160 "supported_io_types": { 00:12:51.160 "read": true, 00:12:51.160 "write": true, 00:12:51.160 "unmap": true, 00:12:51.160 "flush": true, 00:12:51.160 "reset": true, 00:12:51.160 "nvme_admin": false, 00:12:51.160 "nvme_io": false, 00:12:51.160 "nvme_io_md": false, 00:12:51.160 "write_zeroes": true, 00:12:51.160 "zcopy": true, 00:12:51.160 "get_zone_info": false, 00:12:51.160 "zone_management": false, 00:12:51.160 "zone_append": false, 00:12:51.160 "compare": false, 00:12:51.160 "compare_and_write": false, 00:12:51.160 "abort": true, 00:12:51.160 "seek_hole": false, 00:12:51.160 "seek_data": false, 00:12:51.160 "copy": true, 00:12:51.160 "nvme_iov_md": false 00:12:51.160 }, 00:12:51.160 "memory_domains": [ 00:12:51.160 { 00:12:51.160 "dma_device_id": "system", 00:12:51.160 
"dma_device_type": 1 00:12:51.160 }, 00:12:51.160 { 00:12:51.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.160 "dma_device_type": 2 00:12:51.160 } 00:12:51.160 ], 00:12:51.160 "driver_specific": {} 00:12:51.160 } 00:12:51.160 ] 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.160 22:29:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.160 "name": "Existed_Raid", 00:12:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.160 "strip_size_kb": 64, 00:12:51.160 "state": "configuring", 00:12:51.160 "raid_level": "raid0", 00:12:51.160 "superblock": false, 00:12:51.160 "num_base_bdevs": 4, 00:12:51.160 "num_base_bdevs_discovered": 1, 00:12:51.160 "num_base_bdevs_operational": 4, 00:12:51.160 "base_bdevs_list": [ 00:12:51.160 { 00:12:51.160 "name": "BaseBdev1", 00:12:51.160 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:51.160 "is_configured": true, 00:12:51.160 "data_offset": 0, 00:12:51.160 "data_size": 65536 00:12:51.160 }, 00:12:51.160 { 00:12:51.160 "name": "BaseBdev2", 00:12:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.160 "is_configured": false, 00:12:51.160 "data_offset": 0, 00:12:51.160 "data_size": 0 00:12:51.160 }, 00:12:51.160 { 00:12:51.160 "name": "BaseBdev3", 00:12:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.160 "is_configured": false, 00:12:51.160 "data_offset": 0, 00:12:51.160 "data_size": 0 00:12:51.160 }, 00:12:51.160 { 00:12:51.160 "name": "BaseBdev4", 00:12:51.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.160 "is_configured": false, 00:12:51.160 "data_offset": 0, 00:12:51.160 "data_size": 0 00:12:51.160 } 00:12:51.160 ] 00:12:51.160 }' 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.160 22:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.420 [2024-09-27 22:29:47.278447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.420 [2024-09-27 22:29:47.278509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.420 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.420 [2024-09-27 22:29:47.290502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.420 [2024-09-27 22:29:47.292877] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.420 [2024-09-27 22:29:47.292934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.420 [2024-09-27 22:29:47.292946] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.420 [2024-09-27 22:29:47.292962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.420 [2024-09-27 22:29:47.292985] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:51.420 [2024-09-27 22:29:47.292999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.679 22:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.679 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:51.679 "name": "Existed_Raid", 00:12:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.679 "strip_size_kb": 64, 00:12:51.679 "state": "configuring", 00:12:51.679 "raid_level": "raid0", 00:12:51.679 "superblock": false, 00:12:51.679 "num_base_bdevs": 4, 00:12:51.679 "num_base_bdevs_discovered": 1, 00:12:51.679 "num_base_bdevs_operational": 4, 00:12:51.679 "base_bdevs_list": [ 00:12:51.679 { 00:12:51.679 "name": "BaseBdev1", 00:12:51.679 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:51.679 "is_configured": true, 00:12:51.679 "data_offset": 0, 00:12:51.679 "data_size": 65536 00:12:51.679 }, 00:12:51.679 { 00:12:51.679 "name": "BaseBdev2", 00:12:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.679 "is_configured": false, 00:12:51.679 "data_offset": 0, 00:12:51.679 "data_size": 0 00:12:51.679 }, 00:12:51.679 { 00:12:51.679 "name": "BaseBdev3", 00:12:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.679 "is_configured": false, 00:12:51.679 "data_offset": 0, 00:12:51.679 "data_size": 0 00:12:51.679 }, 00:12:51.679 { 00:12:51.679 "name": "BaseBdev4", 00:12:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.679 "is_configured": false, 00:12:51.679 "data_offset": 0, 00:12:51.679 "data_size": 0 00:12:51.679 } 00:12:51.679 ] 00:12:51.679 }' 00:12:51.680 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.680 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 [2024-09-27 22:29:47.773190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:12:51.939 BaseBdev2 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.939 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.939 [ 00:12:51.939 { 00:12:51.939 "name": "BaseBdev2", 00:12:51.939 "aliases": [ 00:12:51.939 "bc2b79cd-2e42-4298-a529-0cbb9056886e" 00:12:51.939 ], 00:12:51.939 "product_name": "Malloc disk", 00:12:51.939 "block_size": 512, 00:12:51.939 "num_blocks": 65536, 00:12:51.939 "uuid": "bc2b79cd-2e42-4298-a529-0cbb9056886e", 00:12:51.939 "assigned_rate_limits": { 00:12:51.939 
"rw_ios_per_sec": 0, 00:12:51.939 "rw_mbytes_per_sec": 0, 00:12:51.939 "r_mbytes_per_sec": 0, 00:12:51.939 "w_mbytes_per_sec": 0 00:12:51.939 }, 00:12:51.939 "claimed": true, 00:12:51.939 "claim_type": "exclusive_write", 00:12:51.939 "zoned": false, 00:12:51.939 "supported_io_types": { 00:12:51.939 "read": true, 00:12:51.939 "write": true, 00:12:51.939 "unmap": true, 00:12:51.939 "flush": true, 00:12:51.939 "reset": true, 00:12:51.939 "nvme_admin": false, 00:12:51.939 "nvme_io": false, 00:12:51.939 "nvme_io_md": false, 00:12:51.939 "write_zeroes": true, 00:12:51.939 "zcopy": true, 00:12:51.939 "get_zone_info": false, 00:12:51.939 "zone_management": false, 00:12:51.939 "zone_append": false, 00:12:51.939 "compare": false, 00:12:51.939 "compare_and_write": false, 00:12:51.939 "abort": true, 00:12:51.939 "seek_hole": false, 00:12:51.939 "seek_data": false, 00:12:51.939 "copy": true, 00:12:51.939 "nvme_iov_md": false 00:12:51.939 }, 00:12:51.939 "memory_domains": [ 00:12:51.939 { 00:12:51.939 "dma_device_id": "system", 00:12:52.198 "dma_device_type": 1 00:12:52.198 }, 00:12:52.198 { 00:12:52.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.198 "dma_device_type": 2 00:12:52.198 } 00:12:52.198 ], 00:12:52.198 "driver_specific": {} 00:12:52.198 } 00:12:52.198 ] 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.198 22:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.198 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.199 "name": "Existed_Raid", 00:12:52.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.199 "strip_size_kb": 64, 00:12:52.199 "state": "configuring", 00:12:52.199 "raid_level": "raid0", 00:12:52.199 "superblock": false, 00:12:52.199 "num_base_bdevs": 4, 00:12:52.199 "num_base_bdevs_discovered": 2, 00:12:52.199 "num_base_bdevs_operational": 4, 00:12:52.199 "base_bdevs_list": [ 00:12:52.199 { 00:12:52.199 "name": "BaseBdev1", 
00:12:52.199 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:52.199 "is_configured": true, 00:12:52.199 "data_offset": 0, 00:12:52.199 "data_size": 65536 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "name": "BaseBdev2", 00:12:52.199 "uuid": "bc2b79cd-2e42-4298-a529-0cbb9056886e", 00:12:52.199 "is_configured": true, 00:12:52.199 "data_offset": 0, 00:12:52.199 "data_size": 65536 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "name": "BaseBdev3", 00:12:52.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.199 "is_configured": false, 00:12:52.199 "data_offset": 0, 00:12:52.199 "data_size": 0 00:12:52.199 }, 00:12:52.199 { 00:12:52.199 "name": "BaseBdev4", 00:12:52.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.199 "is_configured": false, 00:12:52.199 "data_offset": 0, 00:12:52.199 "data_size": 0 00:12:52.199 } 00:12:52.199 ] 00:12:52.199 }' 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.199 22:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 [2024-09-27 22:29:48.316782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.458 BaseBdev3 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # local bdev_timeout= 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.458 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.459 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.459 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.459 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.718 [ 00:12:52.718 { 00:12:52.718 "name": "BaseBdev3", 00:12:52.718 "aliases": [ 00:12:52.718 "dafddaee-f3a5-4880-9746-2258a17ff605" 00:12:52.718 ], 00:12:52.718 "product_name": "Malloc disk", 00:12:52.718 "block_size": 512, 00:12:52.718 "num_blocks": 65536, 00:12:52.718 "uuid": "dafddaee-f3a5-4880-9746-2258a17ff605", 00:12:52.718 "assigned_rate_limits": { 00:12:52.718 "rw_ios_per_sec": 0, 00:12:52.718 "rw_mbytes_per_sec": 0, 00:12:52.718 "r_mbytes_per_sec": 0, 00:12:52.718 "w_mbytes_per_sec": 0 00:12:52.718 }, 00:12:52.718 "claimed": true, 00:12:52.718 "claim_type": "exclusive_write", 00:12:52.718 "zoned": false, 00:12:52.718 "supported_io_types": { 00:12:52.718 "read": true, 00:12:52.718 "write": true, 00:12:52.718 "unmap": true, 00:12:52.718 "flush": true, 00:12:52.718 "reset": true, 00:12:52.718 "nvme_admin": false, 00:12:52.718 
"nvme_io": false, 00:12:52.718 "nvme_io_md": false, 00:12:52.718 "write_zeroes": true, 00:12:52.718 "zcopy": true, 00:12:52.718 "get_zone_info": false, 00:12:52.718 "zone_management": false, 00:12:52.718 "zone_append": false, 00:12:52.718 "compare": false, 00:12:52.718 "compare_and_write": false, 00:12:52.718 "abort": true, 00:12:52.718 "seek_hole": false, 00:12:52.718 "seek_data": false, 00:12:52.718 "copy": true, 00:12:52.718 "nvme_iov_md": false 00:12:52.718 }, 00:12:52.718 "memory_domains": [ 00:12:52.718 { 00:12:52.718 "dma_device_id": "system", 00:12:52.718 "dma_device_type": 1 00:12:52.718 }, 00:12:52.718 { 00:12:52.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.718 "dma_device_type": 2 00:12:52.718 } 00:12:52.718 ], 00:12:52.718 "driver_specific": {} 00:12:52.718 } 00:12:52.718 ] 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.718 22:29:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.718 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.718 "name": "Existed_Raid", 00:12:52.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.718 "strip_size_kb": 64, 00:12:52.718 "state": "configuring", 00:12:52.718 "raid_level": "raid0", 00:12:52.718 "superblock": false, 00:12:52.718 "num_base_bdevs": 4, 00:12:52.718 "num_base_bdevs_discovered": 3, 00:12:52.718 "num_base_bdevs_operational": 4, 00:12:52.718 "base_bdevs_list": [ 00:12:52.718 { 00:12:52.718 "name": "BaseBdev1", 00:12:52.718 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:52.718 "is_configured": true, 00:12:52.718 "data_offset": 0, 00:12:52.718 "data_size": 65536 00:12:52.718 }, 00:12:52.718 { 00:12:52.718 "name": "BaseBdev2", 00:12:52.718 "uuid": "bc2b79cd-2e42-4298-a529-0cbb9056886e", 00:12:52.718 "is_configured": true, 00:12:52.718 "data_offset": 0, 00:12:52.718 "data_size": 65536 00:12:52.718 }, 00:12:52.718 { 00:12:52.718 "name": "BaseBdev3", 00:12:52.718 
"uuid": "dafddaee-f3a5-4880-9746-2258a17ff605", 00:12:52.718 "is_configured": true, 00:12:52.718 "data_offset": 0, 00:12:52.718 "data_size": 65536 00:12:52.718 }, 00:12:52.718 { 00:12:52.718 "name": "BaseBdev4", 00:12:52.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.718 "is_configured": false, 00:12:52.718 "data_offset": 0, 00:12:52.718 "data_size": 0 00:12:52.718 } 00:12:52.719 ] 00:12:52.719 }' 00:12:52.719 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.719 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.978 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:52.978 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.978 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.249 [2024-09-27 22:29:48.864190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:53.249 [2024-09-27 22:29:48.864477] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:53.249 [2024-09-27 22:29:48.864500] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:53.249 [2024-09-27 22:29:48.864832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:53.249 [2024-09-27 22:29:48.865058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:53.249 [2024-09-27 22:29:48.865081] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:53.249 [2024-09-27 22:29:48.865383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.249 BaseBdev4 00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.249 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.250 [ 00:12:53.250 { 00:12:53.250 "name": "BaseBdev4", 00:12:53.250 "aliases": [ 00:12:53.250 "063b2efd-c1b9-4f29-baeb-5a5e21b4aa52" 00:12:53.250 ], 00:12:53.250 "product_name": "Malloc disk", 00:12:53.250 "block_size": 512, 00:12:53.250 "num_blocks": 65536, 00:12:53.250 "uuid": "063b2efd-c1b9-4f29-baeb-5a5e21b4aa52", 00:12:53.250 "assigned_rate_limits": { 00:12:53.250 "rw_ios_per_sec": 0, 00:12:53.250 "rw_mbytes_per_sec": 0, 00:12:53.250 "r_mbytes_per_sec": 0, 00:12:53.250 "w_mbytes_per_sec": 0 00:12:53.250 }, 
00:12:53.250 "claimed": true, 00:12:53.250 "claim_type": "exclusive_write", 00:12:53.250 "zoned": false, 00:12:53.250 "supported_io_types": { 00:12:53.250 "read": true, 00:12:53.250 "write": true, 00:12:53.250 "unmap": true, 00:12:53.250 "flush": true, 00:12:53.250 "reset": true, 00:12:53.250 "nvme_admin": false, 00:12:53.250 "nvme_io": false, 00:12:53.250 "nvme_io_md": false, 00:12:53.250 "write_zeroes": true, 00:12:53.250 "zcopy": true, 00:12:53.250 "get_zone_info": false, 00:12:53.250 "zone_management": false, 00:12:53.250 "zone_append": false, 00:12:53.250 "compare": false, 00:12:53.250 "compare_and_write": false, 00:12:53.250 "abort": true, 00:12:53.250 "seek_hole": false, 00:12:53.250 "seek_data": false, 00:12:53.250 "copy": true, 00:12:53.250 "nvme_iov_md": false 00:12:53.250 }, 00:12:53.250 "memory_domains": [ 00:12:53.250 { 00:12:53.250 "dma_device_id": "system", 00:12:53.250 "dma_device_type": 1 00:12:53.250 }, 00:12:53.250 { 00:12:53.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.250 "dma_device_type": 2 00:12:53.250 } 00:12:53.250 ], 00:12:53.250 "driver_specific": {} 00:12:53.250 } 00:12:53.250 ] 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.250 "name": "Existed_Raid", 00:12:53.250 "uuid": "568a1f1d-2674-4385-8d67-73525447ca5a", 00:12:53.250 "strip_size_kb": 64, 00:12:53.250 "state": "online", 00:12:53.250 "raid_level": "raid0", 00:12:53.250 "superblock": false, 00:12:53.250 "num_base_bdevs": 4, 00:12:53.250 "num_base_bdevs_discovered": 4, 00:12:53.250 "num_base_bdevs_operational": 4, 00:12:53.250 "base_bdevs_list": [ 00:12:53.250 { 00:12:53.250 "name": "BaseBdev1", 00:12:53.250 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:53.250 "is_configured": true, 00:12:53.250 "data_offset": 0, 00:12:53.250 "data_size": 65536 
00:12:53.250 }, 00:12:53.250 { 00:12:53.250 "name": "BaseBdev2", 00:12:53.250 "uuid": "bc2b79cd-2e42-4298-a529-0cbb9056886e", 00:12:53.250 "is_configured": true, 00:12:53.250 "data_offset": 0, 00:12:53.250 "data_size": 65536 00:12:53.250 }, 00:12:53.250 { 00:12:53.250 "name": "BaseBdev3", 00:12:53.250 "uuid": "dafddaee-f3a5-4880-9746-2258a17ff605", 00:12:53.250 "is_configured": true, 00:12:53.250 "data_offset": 0, 00:12:53.250 "data_size": 65536 00:12:53.250 }, 00:12:53.250 { 00:12:53.250 "name": "BaseBdev4", 00:12:53.250 "uuid": "063b2efd-c1b9-4f29-baeb-5a5e21b4aa52", 00:12:53.250 "is_configured": true, 00:12:53.250 "data_offset": 0, 00:12:53.250 "data_size": 65536 00:12:53.250 } 00:12:53.250 ] 00:12:53.250 }' 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.250 22:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.508 22:29:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:53.508 [2024-09-27 22:29:49.379914] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.768 "name": "Existed_Raid", 00:12:53.768 "aliases": [ 00:12:53.768 "568a1f1d-2674-4385-8d67-73525447ca5a" 00:12:53.768 ], 00:12:53.768 "product_name": "Raid Volume", 00:12:53.768 "block_size": 512, 00:12:53.768 "num_blocks": 262144, 00:12:53.768 "uuid": "568a1f1d-2674-4385-8d67-73525447ca5a", 00:12:53.768 "assigned_rate_limits": { 00:12:53.768 "rw_ios_per_sec": 0, 00:12:53.768 "rw_mbytes_per_sec": 0, 00:12:53.768 "r_mbytes_per_sec": 0, 00:12:53.768 "w_mbytes_per_sec": 0 00:12:53.768 }, 00:12:53.768 "claimed": false, 00:12:53.768 "zoned": false, 00:12:53.768 "supported_io_types": { 00:12:53.768 "read": true, 00:12:53.768 "write": true, 00:12:53.768 "unmap": true, 00:12:53.768 "flush": true, 00:12:53.768 "reset": true, 00:12:53.768 "nvme_admin": false, 00:12:53.768 "nvme_io": false, 00:12:53.768 "nvme_io_md": false, 00:12:53.768 "write_zeroes": true, 00:12:53.768 "zcopy": false, 00:12:53.768 "get_zone_info": false, 00:12:53.768 "zone_management": false, 00:12:53.768 "zone_append": false, 00:12:53.768 "compare": false, 00:12:53.768 "compare_and_write": false, 00:12:53.768 "abort": false, 00:12:53.768 "seek_hole": false, 00:12:53.768 "seek_data": false, 00:12:53.768 "copy": false, 00:12:53.768 "nvme_iov_md": false 00:12:53.768 }, 00:12:53.768 "memory_domains": [ 00:12:53.768 { 00:12:53.768 "dma_device_id": "system", 00:12:53.768 "dma_device_type": 1 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.768 "dma_device_type": 2 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "dma_device_id": "system", 00:12:53.768 "dma_device_type": 1 00:12:53.768 }, 
00:12:53.768 { 00:12:53.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.768 "dma_device_type": 2 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "dma_device_id": "system", 00:12:53.768 "dma_device_type": 1 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.768 "dma_device_type": 2 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "dma_device_id": "system", 00:12:53.768 "dma_device_type": 1 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.768 "dma_device_type": 2 00:12:53.768 } 00:12:53.768 ], 00:12:53.768 "driver_specific": { 00:12:53.768 "raid": { 00:12:53.768 "uuid": "568a1f1d-2674-4385-8d67-73525447ca5a", 00:12:53.768 "strip_size_kb": 64, 00:12:53.768 "state": "online", 00:12:53.768 "raid_level": "raid0", 00:12:53.768 "superblock": false, 00:12:53.768 "num_base_bdevs": 4, 00:12:53.768 "num_base_bdevs_discovered": 4, 00:12:53.768 "num_base_bdevs_operational": 4, 00:12:53.768 "base_bdevs_list": [ 00:12:53.768 { 00:12:53.768 "name": "BaseBdev1", 00:12:53.768 "uuid": "da660b01-40e3-4a5d-ada0-d33a418982eb", 00:12:53.768 "is_configured": true, 00:12:53.768 "data_offset": 0, 00:12:53.768 "data_size": 65536 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "name": "BaseBdev2", 00:12:53.768 "uuid": "bc2b79cd-2e42-4298-a529-0cbb9056886e", 00:12:53.768 "is_configured": true, 00:12:53.768 "data_offset": 0, 00:12:53.768 "data_size": 65536 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "name": "BaseBdev3", 00:12:53.768 "uuid": "dafddaee-f3a5-4880-9746-2258a17ff605", 00:12:53.768 "is_configured": true, 00:12:53.768 "data_offset": 0, 00:12:53.768 "data_size": 65536 00:12:53.768 }, 00:12:53.768 { 00:12:53.768 "name": "BaseBdev4", 00:12:53.768 "uuid": "063b2efd-c1b9-4f29-baeb-5a5e21b4aa52", 00:12:53.768 "is_configured": true, 00:12:53.768 "data_offset": 0, 00:12:53.768 "data_size": 65536 00:12:53.768 } 00:12:53.768 ] 00:12:53.768 } 00:12:53.768 } 00:12:53.768 }' 00:12:53.768 22:29:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:53.768 BaseBdev2 00:12:53.768 BaseBdev3 00:12:53.768 BaseBdev4' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:53.768 22:29:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.768 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.769 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.769 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.769 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:53.769 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.769 22:29:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.028 [2024-09-27 22:29:49.675498] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.028 [2024-09-27 22:29:49.675538] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.028 [2024-09-27 22:29:49.675596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:54.028 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.029 
22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.029 "name": "Existed_Raid", 00:12:54.029 "uuid": "568a1f1d-2674-4385-8d67-73525447ca5a", 00:12:54.029 "strip_size_kb": 64, 00:12:54.029 "state": "offline", 00:12:54.029 "raid_level": "raid0", 00:12:54.029 "superblock": false, 00:12:54.029 "num_base_bdevs": 4, 00:12:54.029 "num_base_bdevs_discovered": 3, 00:12:54.029 "num_base_bdevs_operational": 3, 00:12:54.029 "base_bdevs_list": [ 00:12:54.029 { 00:12:54.029 "name": null, 00:12:54.029 
"uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.029 "is_configured": false, 00:12:54.029 "data_offset": 0, 00:12:54.029 "data_size": 65536 00:12:54.029 }, 00:12:54.029 { 00:12:54.029 "name": "BaseBdev2", 00:12:54.029 "uuid": "bc2b79cd-2e42-4298-a529-0cbb9056886e", 00:12:54.029 "is_configured": true, 00:12:54.029 "data_offset": 0, 00:12:54.029 "data_size": 65536 00:12:54.029 }, 00:12:54.029 { 00:12:54.029 "name": "BaseBdev3", 00:12:54.029 "uuid": "dafddaee-f3a5-4880-9746-2258a17ff605", 00:12:54.029 "is_configured": true, 00:12:54.029 "data_offset": 0, 00:12:54.029 "data_size": 65536 00:12:54.029 }, 00:12:54.029 { 00:12:54.029 "name": "BaseBdev4", 00:12:54.029 "uuid": "063b2efd-c1b9-4f29-baeb-5a5e21b4aa52", 00:12:54.029 "is_configured": true, 00:12:54.029 "data_offset": 0, 00:12:54.029 "data_size": 65536 00:12:54.029 } 00:12:54.029 ] 00:12:54.029 }' 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.029 22:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 [2024-09-27 22:29:50.282102] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.597 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.597 [2024-09-27 22:29:50.446703] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.856 [2024-09-27 22:29:50.610440] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:54.856 [2024-09-27 22:29:50.610505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:54.856 22:29:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.856 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.116 BaseBdev2 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:55.116 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.117 
22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.117 [ 00:12:55.117 { 00:12:55.117 "name": "BaseBdev2", 00:12:55.117 "aliases": [ 00:12:55.117 "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c" 00:12:55.117 ], 00:12:55.117 "product_name": "Malloc disk", 00:12:55.117 "block_size": 512, 00:12:55.117 "num_blocks": 65536, 00:12:55.117 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:55.117 "assigned_rate_limits": { 00:12:55.117 "rw_ios_per_sec": 0, 00:12:55.117 "rw_mbytes_per_sec": 0, 00:12:55.117 "r_mbytes_per_sec": 0, 00:12:55.117 "w_mbytes_per_sec": 0 00:12:55.117 }, 00:12:55.117 "claimed": false, 00:12:55.117 "zoned": false, 00:12:55.117 "supported_io_types": { 00:12:55.117 "read": true, 00:12:55.117 "write": true, 00:12:55.117 "unmap": true, 00:12:55.117 "flush": true, 00:12:55.117 "reset": true, 00:12:55.117 "nvme_admin": false, 00:12:55.117 "nvme_io": false, 00:12:55.117 "nvme_io_md": false, 00:12:55.117 "write_zeroes": true, 
00:12:55.117 "zcopy": true, 00:12:55.117 "get_zone_info": false, 00:12:55.117 "zone_management": false, 00:12:55.117 "zone_append": false, 00:12:55.117 "compare": false, 00:12:55.117 "compare_and_write": false, 00:12:55.117 "abort": true, 00:12:55.117 "seek_hole": false, 00:12:55.117 "seek_data": false, 00:12:55.117 "copy": true, 00:12:55.117 "nvme_iov_md": false 00:12:55.117 }, 00:12:55.117 "memory_domains": [ 00:12:55.117 { 00:12:55.117 "dma_device_id": "system", 00:12:55.117 "dma_device_type": 1 00:12:55.117 }, 00:12:55.117 { 00:12:55.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.117 "dma_device_type": 2 00:12:55.117 } 00:12:55.117 ], 00:12:55.117 "driver_specific": {} 00:12:55.117 } 00:12:55.117 ] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.117 BaseBdev3 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.117 22:29:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.117 [ 00:12:55.117 { 00:12:55.117 "name": "BaseBdev3", 00:12:55.117 "aliases": [ 00:12:55.117 "282ef9da-d7be-48e1-a2ec-22bb87d8b162" 00:12:55.117 ], 00:12:55.117 "product_name": "Malloc disk", 00:12:55.117 "block_size": 512, 00:12:55.117 "num_blocks": 65536, 00:12:55.117 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:55.117 "assigned_rate_limits": { 00:12:55.117 "rw_ios_per_sec": 0, 00:12:55.117 "rw_mbytes_per_sec": 0, 00:12:55.117 "r_mbytes_per_sec": 0, 00:12:55.117 "w_mbytes_per_sec": 0 00:12:55.117 }, 00:12:55.117 "claimed": false, 00:12:55.117 "zoned": false, 00:12:55.117 "supported_io_types": { 00:12:55.117 "read": true, 00:12:55.117 "write": true, 00:12:55.117 "unmap": true, 00:12:55.117 "flush": true, 00:12:55.117 "reset": true, 00:12:55.117 "nvme_admin": false, 00:12:55.117 "nvme_io": false, 00:12:55.117 "nvme_io_md": false, 00:12:55.117 "write_zeroes": true, 
00:12:55.117 "zcopy": true, 00:12:55.117 "get_zone_info": false, 00:12:55.117 "zone_management": false, 00:12:55.117 "zone_append": false, 00:12:55.117 "compare": false, 00:12:55.117 "compare_and_write": false, 00:12:55.117 "abort": true, 00:12:55.117 "seek_hole": false, 00:12:55.117 "seek_data": false, 00:12:55.117 "copy": true, 00:12:55.117 "nvme_iov_md": false 00:12:55.117 }, 00:12:55.117 "memory_domains": [ 00:12:55.117 { 00:12:55.117 "dma_device_id": "system", 00:12:55.117 "dma_device_type": 1 00:12:55.117 }, 00:12:55.117 { 00:12:55.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.117 "dma_device_type": 2 00:12:55.117 } 00:12:55.117 ], 00:12:55.117 "driver_specific": {} 00:12:55.117 } 00:12:55.117 ] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.117 22:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.376 BaseBdev4 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.376 22:29:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.376 [ 00:12:55.376 { 00:12:55.376 "name": "BaseBdev4", 00:12:55.376 "aliases": [ 00:12:55.376 "ffa74db6-d888-494a-b81b-57bc0701e099" 00:12:55.376 ], 00:12:55.376 "product_name": "Malloc disk", 00:12:55.376 "block_size": 512, 00:12:55.376 "num_blocks": 65536, 00:12:55.376 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:55.376 "assigned_rate_limits": { 00:12:55.376 "rw_ios_per_sec": 0, 00:12:55.376 "rw_mbytes_per_sec": 0, 00:12:55.376 "r_mbytes_per_sec": 0, 00:12:55.376 "w_mbytes_per_sec": 0 00:12:55.376 }, 00:12:55.376 "claimed": false, 00:12:55.376 "zoned": false, 00:12:55.376 "supported_io_types": { 00:12:55.376 "read": true, 00:12:55.376 "write": true, 00:12:55.376 "unmap": true, 00:12:55.376 "flush": true, 00:12:55.376 "reset": true, 00:12:55.376 "nvme_admin": false, 00:12:55.376 "nvme_io": false, 00:12:55.376 "nvme_io_md": false, 00:12:55.376 "write_zeroes": true, 
00:12:55.376 "zcopy": true, 00:12:55.376 "get_zone_info": false, 00:12:55.376 "zone_management": false, 00:12:55.376 "zone_append": false, 00:12:55.376 "compare": false, 00:12:55.376 "compare_and_write": false, 00:12:55.376 "abort": true, 00:12:55.376 "seek_hole": false, 00:12:55.376 "seek_data": false, 00:12:55.376 "copy": true, 00:12:55.376 "nvme_iov_md": false 00:12:55.376 }, 00:12:55.376 "memory_domains": [ 00:12:55.376 { 00:12:55.376 "dma_device_id": "system", 00:12:55.376 "dma_device_type": 1 00:12:55.376 }, 00:12:55.376 { 00:12:55.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.376 "dma_device_type": 2 00:12:55.376 } 00:12:55.376 ], 00:12:55.376 "driver_specific": {} 00:12:55.376 } 00:12:55.376 ] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.376 [2024-09-27 22:29:51.076570] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.376 [2024-09-27 22:29:51.076763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.376 [2024-09-27 22:29:51.076876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.376 [2024-09-27 22:29:51.079247] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.376 [2024-09-27 22:29:51.079458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.376 "name": "Existed_Raid", 00:12:55.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.376 "strip_size_kb": 64, 00:12:55.376 "state": "configuring", 00:12:55.376 "raid_level": "raid0", 00:12:55.376 "superblock": false, 00:12:55.376 "num_base_bdevs": 4, 00:12:55.376 "num_base_bdevs_discovered": 3, 00:12:55.376 "num_base_bdevs_operational": 4, 00:12:55.376 "base_bdevs_list": [ 00:12:55.376 { 00:12:55.376 "name": "BaseBdev1", 00:12:55.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.376 "is_configured": false, 00:12:55.376 "data_offset": 0, 00:12:55.376 "data_size": 0 00:12:55.376 }, 00:12:55.376 { 00:12:55.376 "name": "BaseBdev2", 00:12:55.376 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:55.376 "is_configured": true, 00:12:55.376 "data_offset": 0, 00:12:55.376 "data_size": 65536 00:12:55.376 }, 00:12:55.376 { 00:12:55.376 "name": "BaseBdev3", 00:12:55.376 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:55.376 "is_configured": true, 00:12:55.376 "data_offset": 0, 00:12:55.376 "data_size": 65536 00:12:55.376 }, 00:12:55.376 { 00:12:55.376 "name": "BaseBdev4", 00:12:55.376 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:55.376 "is_configured": true, 00:12:55.376 "data_offset": 0, 00:12:55.376 "data_size": 65536 00:12:55.376 } 00:12:55.376 ] 00:12:55.376 }' 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.376 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.942 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.943 [2024-09-27 22:29:51.568176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.943 
22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.943 "name": "Existed_Raid", 00:12:55.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.943 "strip_size_kb": 64, 00:12:55.943 "state": "configuring", 00:12:55.943 "raid_level": "raid0", 00:12:55.943 "superblock": false, 00:12:55.943 "num_base_bdevs": 4, 00:12:55.943 "num_base_bdevs_discovered": 2, 00:12:55.943 "num_base_bdevs_operational": 4, 00:12:55.943 "base_bdevs_list": [ 00:12:55.943 { 00:12:55.943 "name": "BaseBdev1", 00:12:55.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.943 "is_configured": false, 00:12:55.943 "data_offset": 0, 00:12:55.943 "data_size": 0 00:12:55.943 }, 00:12:55.943 { 00:12:55.943 "name": null, 00:12:55.943 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:55.943 "is_configured": false, 00:12:55.943 "data_offset": 0, 00:12:55.943 "data_size": 65536 00:12:55.943 }, 00:12:55.943 { 00:12:55.943 "name": "BaseBdev3", 00:12:55.943 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:55.943 "is_configured": true, 00:12:55.943 "data_offset": 0, 00:12:55.943 "data_size": 65536 00:12:55.943 }, 00:12:55.943 { 00:12:55.943 "name": "BaseBdev4", 00:12:55.943 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:55.943 "is_configured": true, 00:12:55.943 "data_offset": 0, 00:12:55.943 "data_size": 65536 00:12:55.943 } 00:12:55.943 ] 00:12:55.943 }' 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.943 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.202 22:29:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.202 22:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.202 [2024-09-27 22:29:52.043785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.202 BaseBdev1 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.202 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.202 [ 00:12:56.202 { 00:12:56.202 "name": "BaseBdev1", 00:12:56.202 "aliases": [ 00:12:56.202 "be5e9ccb-f450-4b5b-9f56-002f3183d018" 00:12:56.202 ], 00:12:56.202 "product_name": "Malloc disk", 00:12:56.202 "block_size": 512, 00:12:56.202 "num_blocks": 65536, 00:12:56.202 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:56.202 "assigned_rate_limits": { 00:12:56.202 "rw_ios_per_sec": 0, 00:12:56.202 "rw_mbytes_per_sec": 0, 00:12:56.202 "r_mbytes_per_sec": 0, 00:12:56.202 "w_mbytes_per_sec": 0 00:12:56.202 }, 00:12:56.462 "claimed": true, 00:12:56.462 "claim_type": "exclusive_write", 00:12:56.462 "zoned": false, 00:12:56.462 "supported_io_types": { 00:12:56.462 "read": true, 00:12:56.462 "write": true, 00:12:56.462 "unmap": true, 00:12:56.462 "flush": true, 00:12:56.462 "reset": true, 00:12:56.462 "nvme_admin": false, 00:12:56.462 "nvme_io": false, 00:12:56.462 "nvme_io_md": false, 00:12:56.462 "write_zeroes": true, 00:12:56.462 "zcopy": true, 00:12:56.462 "get_zone_info": false, 00:12:56.462 "zone_management": false, 00:12:56.462 "zone_append": false, 00:12:56.462 "compare": false, 00:12:56.462 "compare_and_write": false, 00:12:56.462 "abort": true, 00:12:56.462 "seek_hole": false, 00:12:56.462 "seek_data": false, 00:12:56.462 "copy": true, 00:12:56.462 "nvme_iov_md": false 00:12:56.462 }, 00:12:56.462 "memory_domains": [ 00:12:56.462 { 00:12:56.462 "dma_device_id": "system", 00:12:56.462 "dma_device_type": 1 00:12:56.462 }, 00:12:56.462 { 00:12:56.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.462 "dma_device_type": 2 00:12:56.462 } 00:12:56.462 ], 00:12:56.462 "driver_specific": {} 
00:12:56.462 } 00:12:56.462 ] 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.462 22:29:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.462 "name": "Existed_Raid", 00:12:56.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.462 "strip_size_kb": 64, 00:12:56.462 "state": "configuring", 00:12:56.462 "raid_level": "raid0", 00:12:56.462 "superblock": false, 00:12:56.462 "num_base_bdevs": 4, 00:12:56.462 "num_base_bdevs_discovered": 3, 00:12:56.462 "num_base_bdevs_operational": 4, 00:12:56.462 "base_bdevs_list": [ 00:12:56.462 { 00:12:56.462 "name": "BaseBdev1", 00:12:56.462 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:56.462 "is_configured": true, 00:12:56.462 "data_offset": 0, 00:12:56.462 "data_size": 65536 00:12:56.462 }, 00:12:56.462 { 00:12:56.462 "name": null, 00:12:56.462 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:56.462 "is_configured": false, 00:12:56.462 "data_offset": 0, 00:12:56.462 "data_size": 65536 00:12:56.462 }, 00:12:56.462 { 00:12:56.462 "name": "BaseBdev3", 00:12:56.462 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:56.462 "is_configured": true, 00:12:56.462 "data_offset": 0, 00:12:56.462 "data_size": 65536 00:12:56.462 }, 00:12:56.462 { 00:12:56.462 "name": "BaseBdev4", 00:12:56.462 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:56.462 "is_configured": true, 00:12:56.462 "data_offset": 0, 00:12:56.462 "data_size": 65536 00:12:56.462 } 00:12:56.462 ] 00:12:56.462 }' 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.462 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.722 22:29:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.722 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.722 [2024-09-27 22:29:52.595470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.981 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.981 "name": "Existed_Raid", 00:12:56.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.981 "strip_size_kb": 64, 00:12:56.982 "state": "configuring", 00:12:56.982 "raid_level": "raid0", 00:12:56.982 "superblock": false, 00:12:56.982 "num_base_bdevs": 4, 00:12:56.982 "num_base_bdevs_discovered": 2, 00:12:56.982 "num_base_bdevs_operational": 4, 00:12:56.982 "base_bdevs_list": [ 00:12:56.982 { 00:12:56.982 "name": "BaseBdev1", 00:12:56.982 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:56.982 "is_configured": true, 00:12:56.982 "data_offset": 0, 00:12:56.982 "data_size": 65536 00:12:56.982 }, 00:12:56.982 { 00:12:56.982 "name": null, 00:12:56.982 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:56.982 "is_configured": false, 00:12:56.982 "data_offset": 0, 00:12:56.982 "data_size": 65536 00:12:56.982 }, 00:12:56.982 { 00:12:56.982 "name": null, 00:12:56.982 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:56.982 "is_configured": false, 00:12:56.982 "data_offset": 0, 00:12:56.982 "data_size": 65536 00:12:56.982 }, 00:12:56.982 { 00:12:56.982 "name": "BaseBdev4", 00:12:56.982 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:56.982 "is_configured": true, 00:12:56.982 "data_offset": 0, 00:12:56.982 "data_size": 65536 00:12:56.982 } 00:12:56.982 ] 
00:12:56.982 }' 00:12:56.982 22:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.982 22:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.240 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.500 [2024-09-27 22:29:53.119498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:57.500 
22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.500 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.500 "name": "Existed_Raid", 00:12:57.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.500 "strip_size_kb": 64, 00:12:57.500 "state": "configuring", 00:12:57.500 "raid_level": "raid0", 00:12:57.500 "superblock": false, 00:12:57.501 "num_base_bdevs": 4, 00:12:57.501 "num_base_bdevs_discovered": 3, 00:12:57.501 "num_base_bdevs_operational": 4, 00:12:57.501 "base_bdevs_list": [ 00:12:57.501 { 00:12:57.501 "name": "BaseBdev1", 00:12:57.501 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:57.501 "is_configured": true, 00:12:57.501 "data_offset": 0, 00:12:57.501 "data_size": 65536 00:12:57.501 }, 00:12:57.501 { 00:12:57.501 "name": null, 
00:12:57.501 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:57.501 "is_configured": false, 00:12:57.501 "data_offset": 0, 00:12:57.501 "data_size": 65536 00:12:57.501 }, 00:12:57.501 { 00:12:57.501 "name": "BaseBdev3", 00:12:57.501 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:57.501 "is_configured": true, 00:12:57.501 "data_offset": 0, 00:12:57.501 "data_size": 65536 00:12:57.501 }, 00:12:57.501 { 00:12:57.501 "name": "BaseBdev4", 00:12:57.501 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:57.501 "is_configured": true, 00:12:57.501 "data_offset": 0, 00:12:57.501 "data_size": 65536 00:12:57.501 } 00:12:57.501 ] 00:12:57.501 }' 00:12:57.501 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.501 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:57.759 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.760 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.760 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.760 [2024-09-27 22:29:53.627528] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.019 
22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.019 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.019 "name": "Existed_Raid", 00:12:58.019 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:58.019 "strip_size_kb": 64, 00:12:58.019 "state": "configuring", 00:12:58.019 "raid_level": "raid0", 00:12:58.019 "superblock": false, 00:12:58.019 "num_base_bdevs": 4, 00:12:58.019 "num_base_bdevs_discovered": 2, 00:12:58.019 "num_base_bdevs_operational": 4, 00:12:58.019 "base_bdevs_list": [ 00:12:58.019 { 00:12:58.020 "name": null, 00:12:58.020 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:58.020 "is_configured": false, 00:12:58.020 "data_offset": 0, 00:12:58.020 "data_size": 65536 00:12:58.020 }, 00:12:58.020 { 00:12:58.020 "name": null, 00:12:58.020 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:58.020 "is_configured": false, 00:12:58.020 "data_offset": 0, 00:12:58.020 "data_size": 65536 00:12:58.020 }, 00:12:58.020 { 00:12:58.020 "name": "BaseBdev3", 00:12:58.020 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:58.020 "is_configured": true, 00:12:58.020 "data_offset": 0, 00:12:58.020 "data_size": 65536 00:12:58.020 }, 00:12:58.020 { 00:12:58.020 "name": "BaseBdev4", 00:12:58.020 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:58.020 "is_configured": true, 00:12:58.020 "data_offset": 0, 00:12:58.020 "data_size": 65536 00:12:58.020 } 00:12:58.020 ] 00:12:58.020 }' 00:12:58.020 22:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.020 22:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 [2024-09-27 22:29:54.285363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.588 "name": "Existed_Raid", 00:12:58.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.588 "strip_size_kb": 64, 00:12:58.588 "state": "configuring", 00:12:58.588 "raid_level": "raid0", 00:12:58.588 "superblock": false, 00:12:58.588 "num_base_bdevs": 4, 00:12:58.588 "num_base_bdevs_discovered": 3, 00:12:58.588 "num_base_bdevs_operational": 4, 00:12:58.588 "base_bdevs_list": [ 00:12:58.588 { 00:12:58.588 "name": null, 00:12:58.588 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:58.588 "is_configured": false, 00:12:58.588 "data_offset": 0, 00:12:58.588 "data_size": 65536 00:12:58.588 }, 00:12:58.588 { 00:12:58.588 "name": "BaseBdev2", 00:12:58.588 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:58.588 "is_configured": true, 00:12:58.588 "data_offset": 0, 00:12:58.588 "data_size": 65536 00:12:58.588 }, 00:12:58.588 { 00:12:58.588 "name": "BaseBdev3", 00:12:58.588 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:58.588 "is_configured": true, 00:12:58.588 "data_offset": 0, 00:12:58.588 "data_size": 65536 00:12:58.588 }, 00:12:58.588 { 00:12:58.588 "name": "BaseBdev4", 00:12:58.588 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:58.588 "is_configured": true, 00:12:58.588 "data_offset": 0, 00:12:58.588 "data_size": 65536 00:12:58.588 } 00:12:58.588 ] 00:12:58.588 }' 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:58.588 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be5e9ccb-f450-4b5b-9f56-002f3183d018 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.156 [2024-09-27 22:29:54.890418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:59.156 [2024-09-27 22:29:54.890479] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:12:59.156 [2024-09-27 22:29:54.890489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:59.156 [2024-09-27 22:29:54.890780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:59.156 [2024-09-27 22:29:54.890919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:59.156 [2024-09-27 22:29:54.890932] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:59.156 [2024-09-27 22:29:54.891250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.156 NewBaseBdev 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.156 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.156 [ 00:12:59.156 { 00:12:59.156 "name": "NewBaseBdev", 00:12:59.156 "aliases": [ 00:12:59.157 "be5e9ccb-f450-4b5b-9f56-002f3183d018" 00:12:59.157 ], 00:12:59.157 "product_name": "Malloc disk", 00:12:59.157 "block_size": 512, 00:12:59.157 "num_blocks": 65536, 00:12:59.157 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:59.157 "assigned_rate_limits": { 00:12:59.157 "rw_ios_per_sec": 0, 00:12:59.157 "rw_mbytes_per_sec": 0, 00:12:59.157 "r_mbytes_per_sec": 0, 00:12:59.157 "w_mbytes_per_sec": 0 00:12:59.157 }, 00:12:59.157 "claimed": true, 00:12:59.157 "claim_type": "exclusive_write", 00:12:59.157 "zoned": false, 00:12:59.157 "supported_io_types": { 00:12:59.157 "read": true, 00:12:59.157 "write": true, 00:12:59.157 "unmap": true, 00:12:59.157 "flush": true, 00:12:59.157 "reset": true, 00:12:59.157 "nvme_admin": false, 00:12:59.157 "nvme_io": false, 00:12:59.157 "nvme_io_md": false, 00:12:59.157 "write_zeroes": true, 00:12:59.157 "zcopy": true, 00:12:59.157 "get_zone_info": false, 00:12:59.157 "zone_management": false, 00:12:59.157 "zone_append": false, 00:12:59.157 "compare": false, 00:12:59.157 "compare_and_write": false, 00:12:59.157 "abort": true, 00:12:59.157 "seek_hole": false, 00:12:59.157 "seek_data": false, 00:12:59.157 "copy": true, 00:12:59.157 "nvme_iov_md": false 00:12:59.157 }, 00:12:59.157 "memory_domains": [ 00:12:59.157 { 00:12:59.157 "dma_device_id": "system", 00:12:59.157 "dma_device_type": 1 00:12:59.157 }, 00:12:59.157 { 00:12:59.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.157 "dma_device_type": 2 00:12:59.157 } 00:12:59.157 ], 00:12:59.157 "driver_specific": {} 00:12:59.157 } 00:12:59.157 ] 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.157 "name": 
"Existed_Raid", 00:12:59.157 "uuid": "ed07adad-d34d-4c67-b781-9cf37b47a29e", 00:12:59.157 "strip_size_kb": 64, 00:12:59.157 "state": "online", 00:12:59.157 "raid_level": "raid0", 00:12:59.157 "superblock": false, 00:12:59.157 "num_base_bdevs": 4, 00:12:59.157 "num_base_bdevs_discovered": 4, 00:12:59.157 "num_base_bdevs_operational": 4, 00:12:59.157 "base_bdevs_list": [ 00:12:59.157 { 00:12:59.157 "name": "NewBaseBdev", 00:12:59.157 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:59.157 "is_configured": true, 00:12:59.157 "data_offset": 0, 00:12:59.157 "data_size": 65536 00:12:59.157 }, 00:12:59.157 { 00:12:59.157 "name": "BaseBdev2", 00:12:59.157 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:59.157 "is_configured": true, 00:12:59.157 "data_offset": 0, 00:12:59.157 "data_size": 65536 00:12:59.157 }, 00:12:59.157 { 00:12:59.157 "name": "BaseBdev3", 00:12:59.157 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:59.157 "is_configured": true, 00:12:59.157 "data_offset": 0, 00:12:59.157 "data_size": 65536 00:12:59.157 }, 00:12:59.157 { 00:12:59.157 "name": "BaseBdev4", 00:12:59.157 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:59.157 "is_configured": true, 00:12:59.157 "data_offset": 0, 00:12:59.157 "data_size": 65536 00:12:59.157 } 00:12:59.157 ] 00:12:59.157 }' 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.157 22:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.725 22:29:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.725 [2024-09-27 22:29:55.406153] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.725 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.725 "name": "Existed_Raid", 00:12:59.725 "aliases": [ 00:12:59.725 "ed07adad-d34d-4c67-b781-9cf37b47a29e" 00:12:59.725 ], 00:12:59.725 "product_name": "Raid Volume", 00:12:59.725 "block_size": 512, 00:12:59.725 "num_blocks": 262144, 00:12:59.725 "uuid": "ed07adad-d34d-4c67-b781-9cf37b47a29e", 00:12:59.725 "assigned_rate_limits": { 00:12:59.725 "rw_ios_per_sec": 0, 00:12:59.725 "rw_mbytes_per_sec": 0, 00:12:59.725 "r_mbytes_per_sec": 0, 00:12:59.725 "w_mbytes_per_sec": 0 00:12:59.725 }, 00:12:59.725 "claimed": false, 00:12:59.725 "zoned": false, 00:12:59.725 "supported_io_types": { 00:12:59.725 "read": true, 00:12:59.725 "write": true, 00:12:59.725 "unmap": true, 00:12:59.725 "flush": true, 00:12:59.725 "reset": true, 00:12:59.725 "nvme_admin": false, 00:12:59.725 "nvme_io": false, 00:12:59.725 "nvme_io_md": false, 00:12:59.725 "write_zeroes": true, 00:12:59.725 "zcopy": false, 00:12:59.725 "get_zone_info": false, 00:12:59.725 "zone_management": false, 00:12:59.725 "zone_append": false, 00:12:59.725 "compare": 
false, 00:12:59.725 "compare_and_write": false, 00:12:59.725 "abort": false, 00:12:59.725 "seek_hole": false, 00:12:59.725 "seek_data": false, 00:12:59.725 "copy": false, 00:12:59.725 "nvme_iov_md": false 00:12:59.725 }, 00:12:59.725 "memory_domains": [ 00:12:59.725 { 00:12:59.725 "dma_device_id": "system", 00:12:59.725 "dma_device_type": 1 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.725 "dma_device_type": 2 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "system", 00:12:59.725 "dma_device_type": 1 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.725 "dma_device_type": 2 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "system", 00:12:59.725 "dma_device_type": 1 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.725 "dma_device_type": 2 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "system", 00:12:59.725 "dma_device_type": 1 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.725 "dma_device_type": 2 00:12:59.725 } 00:12:59.725 ], 00:12:59.725 "driver_specific": { 00:12:59.725 "raid": { 00:12:59.725 "uuid": "ed07adad-d34d-4c67-b781-9cf37b47a29e", 00:12:59.725 "strip_size_kb": 64, 00:12:59.725 "state": "online", 00:12:59.725 "raid_level": "raid0", 00:12:59.725 "superblock": false, 00:12:59.725 "num_base_bdevs": 4, 00:12:59.725 "num_base_bdevs_discovered": 4, 00:12:59.725 "num_base_bdevs_operational": 4, 00:12:59.725 "base_bdevs_list": [ 00:12:59.725 { 00:12:59.725 "name": "NewBaseBdev", 00:12:59.725 "uuid": "be5e9ccb-f450-4b5b-9f56-002f3183d018", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 0, 00:12:59.725 "data_size": 65536 00:12:59.725 }, 00:12:59.725 { 00:12:59.725 "name": "BaseBdev2", 00:12:59.725 "uuid": "59c1edca-4bc9-44d3-bbe1-7c4ecad2b90c", 00:12:59.725 "is_configured": true, 00:12:59.725 "data_offset": 0, 00:12:59.725 
"data_size": 65536 00:12:59.725 }, 00:12:59.726 { 00:12:59.726 "name": "BaseBdev3", 00:12:59.726 "uuid": "282ef9da-d7be-48e1-a2ec-22bb87d8b162", 00:12:59.726 "is_configured": true, 00:12:59.726 "data_offset": 0, 00:12:59.726 "data_size": 65536 00:12:59.726 }, 00:12:59.726 { 00:12:59.726 "name": "BaseBdev4", 00:12:59.726 "uuid": "ffa74db6-d888-494a-b81b-57bc0701e099", 00:12:59.726 "is_configured": true, 00:12:59.726 "data_offset": 0, 00:12:59.726 "data_size": 65536 00:12:59.726 } 00:12:59.726 ] 00:12:59.726 } 00:12:59.726 } 00:12:59.726 }' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:59.726 BaseBdev2 00:12:59.726 BaseBdev3 00:12:59.726 BaseBdev4' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 
' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.726 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.985 [2024-09-27 22:29:55.721344] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.985 [2024-09-27 22:29:55.721382] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.985 [2024-09-27 22:29:55.721474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.985 [2024-09-27 22:29:55.721549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.985 [2024-09-27 22:29:55.721561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:59.985 22:29:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70032 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 70032 ']' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 70032 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70032 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.985 killing process with pid 70032 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70032' 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 70032 00:12:59.985 [2024-09-27 22:29:55.767369] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.985 22:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 70032 00:13:00.552 [2024-09-27 22:29:56.217212] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.085 ************************************ 00:13:03.085 END TEST raid_state_function_test 00:13:03.085 ************************************ 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:03.085 00:13:03.085 real 0m13.461s 00:13:03.085 user 0m20.521s 00:13:03.085 sys 0m2.497s 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.085 22:29:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:03.085 22:29:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:03.085 22:29:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.085 22:29:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.085 ************************************ 00:13:03.085 START TEST raid_state_function_test_sb 00:13:03.085 ************************************ 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i++ )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:03.085 22:29:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:03.085 Process raid pid: 70720 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70720 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70720' 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70720 00:13:03.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70720 ']' 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.085 22:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.085 [2024-09-27 22:29:58.549326] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:13:03.085 [2024-09-27 22:29:58.549472] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.085 [2024-09-27 22:29:58.716817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.343 [2024-09-27 22:29:58.972463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.601 [2024-09-27 22:29:59.235127] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.601 [2024-09-27 22:29:59.235168] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.170 [2024-09-27 22:29:59.767426] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:04.170 [2024-09-27 22:29:59.767690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:04.170 [2024-09-27 22:29:59.767725] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:04.170 [2024-09-27 22:29:59.767746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:04.170 [2024-09-27 22:29:59.767761] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:04.170 [2024-09-27 22:29:59.767785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:04.170 [2024-09-27 22:29:59.767798] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:04.170 [2024-09-27 22:29:59.767820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.170 22:29:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.170 "name": "Existed_Raid", 00:13:04.170 "uuid": "1e463e11-4496-4f01-a0d2-4a921b9827a4", 00:13:04.170 "strip_size_kb": 64, 00:13:04.170 "state": "configuring", 00:13:04.170 "raid_level": "raid0", 00:13:04.170 "superblock": true, 00:13:04.170 "num_base_bdevs": 4, 00:13:04.170 "num_base_bdevs_discovered": 0, 00:13:04.170 "num_base_bdevs_operational": 4, 00:13:04.170 "base_bdevs_list": [ 00:13:04.170 { 00:13:04.170 "name": "BaseBdev1", 00:13:04.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.170 "is_configured": false, 00:13:04.170 "data_offset": 0, 00:13:04.170 "data_size": 0 00:13:04.170 }, 00:13:04.170 { 00:13:04.170 "name": "BaseBdev2", 00:13:04.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.170 "is_configured": false, 00:13:04.170 "data_offset": 0, 00:13:04.170 "data_size": 0 00:13:04.170 }, 00:13:04.170 { 00:13:04.170 "name": "BaseBdev3", 00:13:04.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.170 "is_configured": false, 00:13:04.170 "data_offset": 0, 00:13:04.170 "data_size": 0 00:13:04.170 }, 00:13:04.170 { 00:13:04.170 "name": "BaseBdev4", 00:13:04.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.170 "is_configured": false, 00:13:04.170 "data_offset": 0, 00:13:04.170 "data_size": 0 00:13:04.170 } 00:13:04.170 ] 00:13:04.170 }' 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.170 22:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 [2024-09-27 22:30:00.226730] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.429 [2024-09-27 22:30:00.226787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 [2024-09-27 22:30:00.238755] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:04.429 [2024-09-27 22:30:00.239009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:04.429 [2024-09-27 22:30:00.239106] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:04.429 [2024-09-27 22:30:00.239155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:04.429 [2024-09-27 22:30:00.239267] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:04.429 [2024-09-27 22:30:00.239324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:04.429 [2024-09-27 22:30:00.239358] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:13:04.429 [2024-09-27 22:30:00.239458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.429 [2024-09-27 22:30:00.296243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.429 BaseBdev1 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.429 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.687 [ 00:13:04.687 { 00:13:04.687 "name": "BaseBdev1", 00:13:04.687 "aliases": [ 00:13:04.687 "08e758ea-0ef8-4954-8f30-f4a7539f0c04" 00:13:04.687 ], 00:13:04.687 "product_name": "Malloc disk", 00:13:04.687 "block_size": 512, 00:13:04.687 "num_blocks": 65536, 00:13:04.687 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:04.687 "assigned_rate_limits": { 00:13:04.687 "rw_ios_per_sec": 0, 00:13:04.687 "rw_mbytes_per_sec": 0, 00:13:04.687 "r_mbytes_per_sec": 0, 00:13:04.687 "w_mbytes_per_sec": 0 00:13:04.687 }, 00:13:04.687 "claimed": true, 00:13:04.687 "claim_type": "exclusive_write", 00:13:04.687 "zoned": false, 00:13:04.687 "supported_io_types": { 00:13:04.687 "read": true, 00:13:04.687 "write": true, 00:13:04.687 "unmap": true, 00:13:04.687 "flush": true, 00:13:04.687 "reset": true, 00:13:04.687 "nvme_admin": false, 00:13:04.687 "nvme_io": false, 00:13:04.687 "nvme_io_md": false, 00:13:04.687 "write_zeroes": true, 00:13:04.687 "zcopy": true, 00:13:04.687 "get_zone_info": false, 00:13:04.687 "zone_management": false, 00:13:04.687 "zone_append": false, 00:13:04.687 "compare": false, 00:13:04.687 "compare_and_write": false, 00:13:04.687 "abort": true, 00:13:04.687 "seek_hole": false, 00:13:04.687 "seek_data": false, 00:13:04.687 "copy": true, 00:13:04.687 "nvme_iov_md": false 00:13:04.687 }, 00:13:04.687 "memory_domains": [ 00:13:04.687 { 00:13:04.687 "dma_device_id": "system", 00:13:04.687 "dma_device_type": 1 00:13:04.687 }, 00:13:04.687 { 00:13:04.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.687 "dma_device_type": 2 00:13:04.687 } 00:13:04.687 ], 00:13:04.687 "driver_specific": {} 
00:13:04.687 } 00:13:04.687 ] 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.687 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.688 "name": "Existed_Raid", 00:13:04.688 "uuid": "a5c378f5-d919-41e5-b091-11537b1a99fd", 00:13:04.688 "strip_size_kb": 64, 00:13:04.688 "state": "configuring", 00:13:04.688 "raid_level": "raid0", 00:13:04.688 "superblock": true, 00:13:04.688 "num_base_bdevs": 4, 00:13:04.688 "num_base_bdevs_discovered": 1, 00:13:04.688 "num_base_bdevs_operational": 4, 00:13:04.688 "base_bdevs_list": [ 00:13:04.688 { 00:13:04.688 "name": "BaseBdev1", 00:13:04.688 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:04.688 "is_configured": true, 00:13:04.688 "data_offset": 2048, 00:13:04.688 "data_size": 63488 00:13:04.688 }, 00:13:04.688 { 00:13:04.688 "name": "BaseBdev2", 00:13:04.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.688 "is_configured": false, 00:13:04.688 "data_offset": 0, 00:13:04.688 "data_size": 0 00:13:04.688 }, 00:13:04.688 { 00:13:04.688 "name": "BaseBdev3", 00:13:04.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.688 "is_configured": false, 00:13:04.688 "data_offset": 0, 00:13:04.688 "data_size": 0 00:13:04.688 }, 00:13:04.688 { 00:13:04.688 "name": "BaseBdev4", 00:13:04.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.688 "is_configured": false, 00:13:04.688 "data_offset": 0, 00:13:04.688 "data_size": 0 00:13:04.688 } 00:13:04.688 ] 00:13:04.688 }' 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.688 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.946 [2024-09-27 22:30:00.756128] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:04.946 [2024-09-27 22:30:00.756194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.946 [2024-09-27 22:30:00.768202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.946 [2024-09-27 22:30:00.770631] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:04.946 [2024-09-27 22:30:00.770811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:04.946 [2024-09-27 22:30:00.770905] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:04.946 [2024-09-27 22:30:00.770955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:04.946 [2024-09-27 22:30:00.771058] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:04.946 [2024-09-27 22:30:00.771105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:04.946 22:30:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.946 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.205 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.205 "name": 
"Existed_Raid", 00:13:05.205 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:05.205 "strip_size_kb": 64, 00:13:05.205 "state": "configuring", 00:13:05.205 "raid_level": "raid0", 00:13:05.205 "superblock": true, 00:13:05.205 "num_base_bdevs": 4, 00:13:05.205 "num_base_bdevs_discovered": 1, 00:13:05.205 "num_base_bdevs_operational": 4, 00:13:05.205 "base_bdevs_list": [ 00:13:05.205 { 00:13:05.205 "name": "BaseBdev1", 00:13:05.205 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:05.205 "is_configured": true, 00:13:05.205 "data_offset": 2048, 00:13:05.205 "data_size": 63488 00:13:05.205 }, 00:13:05.205 { 00:13:05.205 "name": "BaseBdev2", 00:13:05.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.205 "is_configured": false, 00:13:05.205 "data_offset": 0, 00:13:05.205 "data_size": 0 00:13:05.205 }, 00:13:05.205 { 00:13:05.205 "name": "BaseBdev3", 00:13:05.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.205 "is_configured": false, 00:13:05.205 "data_offset": 0, 00:13:05.205 "data_size": 0 00:13:05.205 }, 00:13:05.205 { 00:13:05.205 "name": "BaseBdev4", 00:13:05.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.205 "is_configured": false, 00:13:05.205 "data_offset": 0, 00:13:05.205 "data_size": 0 00:13:05.205 } 00:13:05.205 ] 00:13:05.205 }' 00:13:05.205 22:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.205 22:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.463 [2024-09-27 22:30:01.263572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:13:05.463 BaseBdev2 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.463 [ 00:13:05.463 { 00:13:05.463 "name": "BaseBdev2", 00:13:05.463 "aliases": [ 00:13:05.463 "1cfffe6e-f5bf-44d2-a324-53d0a790440a" 00:13:05.463 ], 00:13:05.463 "product_name": "Malloc disk", 00:13:05.463 "block_size": 512, 00:13:05.463 "num_blocks": 65536, 00:13:05.463 "uuid": "1cfffe6e-f5bf-44d2-a324-53d0a790440a", 00:13:05.463 
"assigned_rate_limits": { 00:13:05.463 "rw_ios_per_sec": 0, 00:13:05.463 "rw_mbytes_per_sec": 0, 00:13:05.463 "r_mbytes_per_sec": 0, 00:13:05.463 "w_mbytes_per_sec": 0 00:13:05.463 }, 00:13:05.463 "claimed": true, 00:13:05.463 "claim_type": "exclusive_write", 00:13:05.463 "zoned": false, 00:13:05.463 "supported_io_types": { 00:13:05.463 "read": true, 00:13:05.463 "write": true, 00:13:05.463 "unmap": true, 00:13:05.463 "flush": true, 00:13:05.463 "reset": true, 00:13:05.463 "nvme_admin": false, 00:13:05.463 "nvme_io": false, 00:13:05.463 "nvme_io_md": false, 00:13:05.463 "write_zeroes": true, 00:13:05.463 "zcopy": true, 00:13:05.463 "get_zone_info": false, 00:13:05.463 "zone_management": false, 00:13:05.463 "zone_append": false, 00:13:05.463 "compare": false, 00:13:05.463 "compare_and_write": false, 00:13:05.463 "abort": true, 00:13:05.463 "seek_hole": false, 00:13:05.463 "seek_data": false, 00:13:05.463 "copy": true, 00:13:05.463 "nvme_iov_md": false 00:13:05.463 }, 00:13:05.463 "memory_domains": [ 00:13:05.463 { 00:13:05.463 "dma_device_id": "system", 00:13:05.463 "dma_device_type": 1 00:13:05.463 }, 00:13:05.463 { 00:13:05.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.463 "dma_device_type": 2 00:13:05.463 } 00:13:05.463 ], 00:13:05.463 "driver_specific": {} 00:13:05.463 } 00:13:05.463 ] 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.463 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.721 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.721 "name": "Existed_Raid", 00:13:05.721 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:05.721 "strip_size_kb": 64, 00:13:05.721 "state": "configuring", 00:13:05.721 "raid_level": "raid0", 00:13:05.721 "superblock": true, 00:13:05.721 "num_base_bdevs": 4, 00:13:05.721 "num_base_bdevs_discovered": 2, 00:13:05.721 "num_base_bdevs_operational": 4, 
00:13:05.721 "base_bdevs_list": [ 00:13:05.721 { 00:13:05.721 "name": "BaseBdev1", 00:13:05.721 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:05.721 "is_configured": true, 00:13:05.721 "data_offset": 2048, 00:13:05.721 "data_size": 63488 00:13:05.721 }, 00:13:05.721 { 00:13:05.721 "name": "BaseBdev2", 00:13:05.721 "uuid": "1cfffe6e-f5bf-44d2-a324-53d0a790440a", 00:13:05.721 "is_configured": true, 00:13:05.721 "data_offset": 2048, 00:13:05.721 "data_size": 63488 00:13:05.721 }, 00:13:05.721 { 00:13:05.721 "name": "BaseBdev3", 00:13:05.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.721 "is_configured": false, 00:13:05.721 "data_offset": 0, 00:13:05.721 "data_size": 0 00:13:05.721 }, 00:13:05.721 { 00:13:05.721 "name": "BaseBdev4", 00:13:05.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.721 "is_configured": false, 00:13:05.721 "data_offset": 0, 00:13:05.721 "data_size": 0 00:13:05.721 } 00:13:05.721 ] 00:13:05.721 }' 00:13:05.721 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.721 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.980 [2024-09-27 22:30:01.791379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.980 BaseBdev3 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.980 [ 00:13:05.980 { 00:13:05.980 "name": "BaseBdev3", 00:13:05.980 "aliases": [ 00:13:05.980 "65a88917-547b-473c-ad76-3e0fab3eed85" 00:13:05.980 ], 00:13:05.980 "product_name": "Malloc disk", 00:13:05.980 "block_size": 512, 00:13:05.980 "num_blocks": 65536, 00:13:05.980 "uuid": "65a88917-547b-473c-ad76-3e0fab3eed85", 00:13:05.980 "assigned_rate_limits": { 00:13:05.980 "rw_ios_per_sec": 0, 00:13:05.980 "rw_mbytes_per_sec": 0, 00:13:05.980 "r_mbytes_per_sec": 0, 00:13:05.980 "w_mbytes_per_sec": 0 00:13:05.980 }, 00:13:05.980 "claimed": true, 00:13:05.980 "claim_type": "exclusive_write", 00:13:05.980 "zoned": false, 00:13:05.980 "supported_io_types": { 00:13:05.980 "read": true, 00:13:05.980 
"write": true, 00:13:05.980 "unmap": true, 00:13:05.980 "flush": true, 00:13:05.980 "reset": true, 00:13:05.980 "nvme_admin": false, 00:13:05.980 "nvme_io": false, 00:13:05.980 "nvme_io_md": false, 00:13:05.980 "write_zeroes": true, 00:13:05.980 "zcopy": true, 00:13:05.980 "get_zone_info": false, 00:13:05.980 "zone_management": false, 00:13:05.980 "zone_append": false, 00:13:05.980 "compare": false, 00:13:05.980 "compare_and_write": false, 00:13:05.980 "abort": true, 00:13:05.980 "seek_hole": false, 00:13:05.980 "seek_data": false, 00:13:05.980 "copy": true, 00:13:05.980 "nvme_iov_md": false 00:13:05.980 }, 00:13:05.980 "memory_domains": [ 00:13:05.980 { 00:13:05.980 "dma_device_id": "system", 00:13:05.980 "dma_device_type": 1 00:13:05.980 }, 00:13:05.980 { 00:13:05.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.980 "dma_device_type": 2 00:13:05.980 } 00:13:05.980 ], 00:13:05.980 "driver_specific": {} 00:13:05.980 } 00:13:05.980 ] 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.980 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.239 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.239 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.239 "name": "Existed_Raid", 00:13:06.239 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:06.239 "strip_size_kb": 64, 00:13:06.239 "state": "configuring", 00:13:06.239 "raid_level": "raid0", 00:13:06.239 "superblock": true, 00:13:06.239 "num_base_bdevs": 4, 00:13:06.239 "num_base_bdevs_discovered": 3, 00:13:06.239 "num_base_bdevs_operational": 4, 00:13:06.239 "base_bdevs_list": [ 00:13:06.239 { 00:13:06.239 "name": "BaseBdev1", 00:13:06.239 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:06.239 "is_configured": true, 00:13:06.239 "data_offset": 2048, 00:13:06.239 "data_size": 63488 00:13:06.239 }, 00:13:06.239 { 00:13:06.239 "name": "BaseBdev2", 00:13:06.239 "uuid": 
"1cfffe6e-f5bf-44d2-a324-53d0a790440a", 00:13:06.239 "is_configured": true, 00:13:06.239 "data_offset": 2048, 00:13:06.239 "data_size": 63488 00:13:06.239 }, 00:13:06.239 { 00:13:06.239 "name": "BaseBdev3", 00:13:06.239 "uuid": "65a88917-547b-473c-ad76-3e0fab3eed85", 00:13:06.239 "is_configured": true, 00:13:06.239 "data_offset": 2048, 00:13:06.239 "data_size": 63488 00:13:06.239 }, 00:13:06.239 { 00:13:06.239 "name": "BaseBdev4", 00:13:06.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.239 "is_configured": false, 00:13:06.239 "data_offset": 0, 00:13:06.239 "data_size": 0 00:13:06.239 } 00:13:06.239 ] 00:13:06.239 }' 00:13:06.239 22:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.239 22:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.498 BaseBdev4 00:13:06.498 [2024-09-27 22:30:02.344122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:06.498 [2024-09-27 22:30:02.344408] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:06.498 [2024-09-27 22:30:02.344426] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:06.498 [2024-09-27 22:30:02.344731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:06.498 [2024-09-27 22:30:02.344891] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:06.498 [2024-09-27 22:30:02.344908] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:06.498 [2024-09-27 22:30:02.345087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.498 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.498 [ 00:13:06.498 { 00:13:06.498 "name": "BaseBdev4", 00:13:06.498 "aliases": [ 00:13:06.498 "a3e88ea8-c008-4b73-b2cb-7f6896e1b7c1" 00:13:06.498 ], 00:13:06.498 "product_name": "Malloc disk", 00:13:06.498 "block_size": 512, 00:13:06.498 
"num_blocks": 65536, 00:13:06.498 "uuid": "a3e88ea8-c008-4b73-b2cb-7f6896e1b7c1", 00:13:06.498 "assigned_rate_limits": { 00:13:06.498 "rw_ios_per_sec": 0, 00:13:06.498 "rw_mbytes_per_sec": 0, 00:13:06.757 "r_mbytes_per_sec": 0, 00:13:06.757 "w_mbytes_per_sec": 0 00:13:06.757 }, 00:13:06.757 "claimed": true, 00:13:06.757 "claim_type": "exclusive_write", 00:13:06.757 "zoned": false, 00:13:06.757 "supported_io_types": { 00:13:06.757 "read": true, 00:13:06.757 "write": true, 00:13:06.757 "unmap": true, 00:13:06.757 "flush": true, 00:13:06.757 "reset": true, 00:13:06.757 "nvme_admin": false, 00:13:06.757 "nvme_io": false, 00:13:06.757 "nvme_io_md": false, 00:13:06.757 "write_zeroes": true, 00:13:06.757 "zcopy": true, 00:13:06.757 "get_zone_info": false, 00:13:06.757 "zone_management": false, 00:13:06.757 "zone_append": false, 00:13:06.757 "compare": false, 00:13:06.757 "compare_and_write": false, 00:13:06.757 "abort": true, 00:13:06.757 "seek_hole": false, 00:13:06.757 "seek_data": false, 00:13:06.757 "copy": true, 00:13:06.757 "nvme_iov_md": false 00:13:06.757 }, 00:13:06.757 "memory_domains": [ 00:13:06.757 { 00:13:06.757 "dma_device_id": "system", 00:13:06.757 "dma_device_type": 1 00:13:06.757 }, 00:13:06.757 { 00:13:06.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.757 "dma_device_type": 2 00:13:06.757 } 00:13:06.757 ], 00:13:06.757 "driver_specific": {} 00:13:06.757 } 00:13:06.757 ] 00:13:06.757 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.758 "name": "Existed_Raid", 00:13:06.758 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:06.758 "strip_size_kb": 64, 00:13:06.758 "state": "online", 00:13:06.758 "raid_level": "raid0", 00:13:06.758 "superblock": true, 00:13:06.758 "num_base_bdevs": 4, 
00:13:06.758 "num_base_bdevs_discovered": 4, 00:13:06.758 "num_base_bdevs_operational": 4, 00:13:06.758 "base_bdevs_list": [ 00:13:06.758 { 00:13:06.758 "name": "BaseBdev1", 00:13:06.758 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 }, 00:13:06.758 { 00:13:06.758 "name": "BaseBdev2", 00:13:06.758 "uuid": "1cfffe6e-f5bf-44d2-a324-53d0a790440a", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 }, 00:13:06.758 { 00:13:06.758 "name": "BaseBdev3", 00:13:06.758 "uuid": "65a88917-547b-473c-ad76-3e0fab3eed85", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 }, 00:13:06.758 { 00:13:06.758 "name": "BaseBdev4", 00:13:06.758 "uuid": "a3e88ea8-c008-4b73-b2cb-7f6896e1b7c1", 00:13:06.758 "is_configured": true, 00:13:06.758 "data_offset": 2048, 00:13:06.758 "data_size": 63488 00:13:06.758 } 00:13:06.758 ] 00:13:06.758 }' 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.758 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.067 
22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.067 [2024-09-27 22:30:02.816457] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.067 "name": "Existed_Raid", 00:13:07.067 "aliases": [ 00:13:07.067 "dae3be01-e45a-43a4-9fb5-31e70c63c890" 00:13:07.067 ], 00:13:07.067 "product_name": "Raid Volume", 00:13:07.067 "block_size": 512, 00:13:07.067 "num_blocks": 253952, 00:13:07.067 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:07.067 "assigned_rate_limits": { 00:13:07.067 "rw_ios_per_sec": 0, 00:13:07.067 "rw_mbytes_per_sec": 0, 00:13:07.067 "r_mbytes_per_sec": 0, 00:13:07.067 "w_mbytes_per_sec": 0 00:13:07.067 }, 00:13:07.067 "claimed": false, 00:13:07.067 "zoned": false, 00:13:07.067 "supported_io_types": { 00:13:07.067 "read": true, 00:13:07.067 "write": true, 00:13:07.067 "unmap": true, 00:13:07.067 "flush": true, 00:13:07.067 "reset": true, 00:13:07.067 "nvme_admin": false, 00:13:07.067 "nvme_io": false, 00:13:07.067 "nvme_io_md": false, 00:13:07.067 "write_zeroes": true, 00:13:07.067 "zcopy": false, 00:13:07.067 "get_zone_info": false, 00:13:07.067 "zone_management": false, 00:13:07.067 "zone_append": false, 00:13:07.067 "compare": false, 00:13:07.067 "compare_and_write": false, 00:13:07.067 "abort": false, 00:13:07.067 "seek_hole": false, 00:13:07.067 "seek_data": false, 00:13:07.067 "copy": false, 00:13:07.067 
"nvme_iov_md": false 00:13:07.067 }, 00:13:07.067 "memory_domains": [ 00:13:07.067 { 00:13:07.067 "dma_device_id": "system", 00:13:07.067 "dma_device_type": 1 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.067 "dma_device_type": 2 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "system", 00:13:07.067 "dma_device_type": 1 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.067 "dma_device_type": 2 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "system", 00:13:07.067 "dma_device_type": 1 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.067 "dma_device_type": 2 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "system", 00:13:07.067 "dma_device_type": 1 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.067 "dma_device_type": 2 00:13:07.067 } 00:13:07.067 ], 00:13:07.067 "driver_specific": { 00:13:07.067 "raid": { 00:13:07.067 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:07.067 "strip_size_kb": 64, 00:13:07.067 "state": "online", 00:13:07.067 "raid_level": "raid0", 00:13:07.067 "superblock": true, 00:13:07.067 "num_base_bdevs": 4, 00:13:07.067 "num_base_bdevs_discovered": 4, 00:13:07.067 "num_base_bdevs_operational": 4, 00:13:07.067 "base_bdevs_list": [ 00:13:07.067 { 00:13:07.067 "name": "BaseBdev1", 00:13:07.067 "uuid": "08e758ea-0ef8-4954-8f30-f4a7539f0c04", 00:13:07.067 "is_configured": true, 00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "name": "BaseBdev2", 00:13:07.067 "uuid": "1cfffe6e-f5bf-44d2-a324-53d0a790440a", 00:13:07.067 "is_configured": true, 00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "name": "BaseBdev3", 00:13:07.067 "uuid": "65a88917-547b-473c-ad76-3e0fab3eed85", 00:13:07.067 "is_configured": true, 
00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 }, 00:13:07.067 { 00:13:07.067 "name": "BaseBdev4", 00:13:07.067 "uuid": "a3e88ea8-c008-4b73-b2cb-7f6896e1b7c1", 00:13:07.067 "is_configured": true, 00:13:07.067 "data_offset": 2048, 00:13:07.067 "data_size": 63488 00:13:07.067 } 00:13:07.067 ] 00:13:07.067 } 00:13:07.067 } 00:13:07.067 }' 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:07.067 BaseBdev2 00:13:07.067 BaseBdev3 00:13:07.067 BaseBdev4' 00:13:07.067 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.326 22:30:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.326 22:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.326 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.326 [2024-09-27 22:30:03.136188] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:07.326 [2024-09-27 22:30:03.136232] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.326 [2024-09-27 22:30:03.136294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:07.585 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.586 "name": "Existed_Raid", 00:13:07.586 "uuid": "dae3be01-e45a-43a4-9fb5-31e70c63c890", 00:13:07.586 "strip_size_kb": 64, 00:13:07.586 "state": "offline", 00:13:07.586 "raid_level": "raid0", 00:13:07.586 "superblock": true, 00:13:07.586 "num_base_bdevs": 4, 00:13:07.586 "num_base_bdevs_discovered": 3, 00:13:07.586 "num_base_bdevs_operational": 3, 00:13:07.586 "base_bdevs_list": [ 00:13:07.586 { 00:13:07.586 "name": null, 00:13:07.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.586 "is_configured": false, 00:13:07.586 "data_offset": 0, 00:13:07.586 "data_size": 63488 00:13:07.586 }, 00:13:07.586 { 00:13:07.586 "name": "BaseBdev2", 00:13:07.586 "uuid": "1cfffe6e-f5bf-44d2-a324-53d0a790440a", 00:13:07.586 "is_configured": true, 00:13:07.586 "data_offset": 2048, 00:13:07.586 "data_size": 63488 00:13:07.586 }, 00:13:07.586 { 00:13:07.586 "name": "BaseBdev3", 00:13:07.586 "uuid": "65a88917-547b-473c-ad76-3e0fab3eed85", 00:13:07.586 "is_configured": true, 00:13:07.586 "data_offset": 2048, 00:13:07.586 "data_size": 63488 00:13:07.586 }, 00:13:07.586 { 00:13:07.586 "name": "BaseBdev4", 00:13:07.586 "uuid": "a3e88ea8-c008-4b73-b2cb-7f6896e1b7c1", 00:13:07.586 "is_configured": true, 00:13:07.586 "data_offset": 2048, 00:13:07.586 "data_size": 63488 00:13:07.586 } 00:13:07.586 ] 00:13:07.586 }' 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.586 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.844 
22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.844 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.844 [2024-09-27 22:30:03.685235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.104 [2024-09-27 22:30:03.844228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.104 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.363 22:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.363 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:08.363 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.363 22:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:08.363 22:30:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.363 [2024-09-27 22:30:04.005965] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:08.363 [2024-09-27 22:30:04.006041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:08.363 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.364 BaseBdev2 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.364 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.364 [ 00:13:08.364 { 00:13:08.364 "name": "BaseBdev2", 00:13:08.364 "aliases": [ 00:13:08.364 
"311d7637-a171-41e9-ae26-d310047d92e9" 00:13:08.364 ], 00:13:08.364 "product_name": "Malloc disk", 00:13:08.364 "block_size": 512, 00:13:08.364 "num_blocks": 65536, 00:13:08.364 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:08.364 "assigned_rate_limits": { 00:13:08.364 "rw_ios_per_sec": 0, 00:13:08.364 "rw_mbytes_per_sec": 0, 00:13:08.364 "r_mbytes_per_sec": 0, 00:13:08.364 "w_mbytes_per_sec": 0 00:13:08.364 }, 00:13:08.364 "claimed": false, 00:13:08.364 "zoned": false, 00:13:08.623 "supported_io_types": { 00:13:08.623 "read": true, 00:13:08.623 "write": true, 00:13:08.623 "unmap": true, 00:13:08.623 "flush": true, 00:13:08.623 "reset": true, 00:13:08.623 "nvme_admin": false, 00:13:08.623 "nvme_io": false, 00:13:08.623 "nvme_io_md": false, 00:13:08.623 "write_zeroes": true, 00:13:08.623 "zcopy": true, 00:13:08.623 "get_zone_info": false, 00:13:08.623 "zone_management": false, 00:13:08.623 "zone_append": false, 00:13:08.623 "compare": false, 00:13:08.623 "compare_and_write": false, 00:13:08.623 "abort": true, 00:13:08.623 "seek_hole": false, 00:13:08.623 "seek_data": false, 00:13:08.623 "copy": true, 00:13:08.623 "nvme_iov_md": false 00:13:08.623 }, 00:13:08.623 "memory_domains": [ 00:13:08.623 { 00:13:08.623 "dma_device_id": "system", 00:13:08.623 "dma_device_type": 1 00:13:08.623 }, 00:13:08.623 { 00:13:08.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.623 "dma_device_type": 2 00:13:08.623 } 00:13:08.623 ], 00:13:08.623 "driver_specific": {} 00:13:08.623 } 00:13:08.623 ] 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:08.623 22:30:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.623 BaseBdev3 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.623 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.623 [ 00:13:08.623 { 
00:13:08.623 "name": "BaseBdev3", 00:13:08.623 "aliases": [ 00:13:08.623 "a1ad1868-3284-425b-88ca-60f03a7c4cd1" 00:13:08.623 ], 00:13:08.623 "product_name": "Malloc disk", 00:13:08.623 "block_size": 512, 00:13:08.623 "num_blocks": 65536, 00:13:08.623 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:08.623 "assigned_rate_limits": { 00:13:08.623 "rw_ios_per_sec": 0, 00:13:08.623 "rw_mbytes_per_sec": 0, 00:13:08.623 "r_mbytes_per_sec": 0, 00:13:08.623 "w_mbytes_per_sec": 0 00:13:08.623 }, 00:13:08.623 "claimed": false, 00:13:08.623 "zoned": false, 00:13:08.623 "supported_io_types": { 00:13:08.623 "read": true, 00:13:08.623 "write": true, 00:13:08.623 "unmap": true, 00:13:08.623 "flush": true, 00:13:08.623 "reset": true, 00:13:08.623 "nvme_admin": false, 00:13:08.623 "nvme_io": false, 00:13:08.623 "nvme_io_md": false, 00:13:08.623 "write_zeroes": true, 00:13:08.623 "zcopy": true, 00:13:08.623 "get_zone_info": false, 00:13:08.623 "zone_management": false, 00:13:08.623 "zone_append": false, 00:13:08.623 "compare": false, 00:13:08.623 "compare_and_write": false, 00:13:08.624 "abort": true, 00:13:08.624 "seek_hole": false, 00:13:08.624 "seek_data": false, 00:13:08.624 "copy": true, 00:13:08.624 "nvme_iov_md": false 00:13:08.624 }, 00:13:08.624 "memory_domains": [ 00:13:08.624 { 00:13:08.624 "dma_device_id": "system", 00:13:08.624 "dma_device_type": 1 00:13:08.624 }, 00:13:08.624 { 00:13:08.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.624 "dma_device_type": 2 00:13:08.624 } 00:13:08.624 ], 00:13:08.624 "driver_specific": {} 00:13:08.624 } 00:13:08.624 ] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.624 BaseBdev4 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:08.624 [ 00:13:08.624 { 00:13:08.624 "name": "BaseBdev4", 00:13:08.624 "aliases": [ 00:13:08.624 "cce9466b-2d07-415a-8870-82b9c5276db7" 00:13:08.624 ], 00:13:08.624 "product_name": "Malloc disk", 00:13:08.624 "block_size": 512, 00:13:08.624 "num_blocks": 65536, 00:13:08.624 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:08.624 "assigned_rate_limits": { 00:13:08.624 "rw_ios_per_sec": 0, 00:13:08.624 "rw_mbytes_per_sec": 0, 00:13:08.624 "r_mbytes_per_sec": 0, 00:13:08.624 "w_mbytes_per_sec": 0 00:13:08.624 }, 00:13:08.624 "claimed": false, 00:13:08.624 "zoned": false, 00:13:08.624 "supported_io_types": { 00:13:08.624 "read": true, 00:13:08.624 "write": true, 00:13:08.624 "unmap": true, 00:13:08.624 "flush": true, 00:13:08.624 "reset": true, 00:13:08.624 "nvme_admin": false, 00:13:08.624 "nvme_io": false, 00:13:08.624 "nvme_io_md": false, 00:13:08.624 "write_zeroes": true, 00:13:08.624 "zcopy": true, 00:13:08.624 "get_zone_info": false, 00:13:08.624 "zone_management": false, 00:13:08.624 "zone_append": false, 00:13:08.624 "compare": false, 00:13:08.624 "compare_and_write": false, 00:13:08.624 "abort": true, 00:13:08.624 "seek_hole": false, 00:13:08.624 "seek_data": false, 00:13:08.624 "copy": true, 00:13:08.624 "nvme_iov_md": false 00:13:08.624 }, 00:13:08.624 "memory_domains": [ 00:13:08.624 { 00:13:08.624 "dma_device_id": "system", 00:13:08.624 "dma_device_type": 1 00:13:08.624 }, 00:13:08.624 { 00:13:08.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.624 "dma_device_type": 2 00:13:08.624 } 00:13:08.624 ], 00:13:08.624 "driver_specific": {} 00:13:08.624 } 00:13:08.624 ] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:08.624 22:30:04 
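At this point BaseBdev2, BaseBdev3, and BaseBdev4 each report 65536 blocks, yet once the test runs `bdev_raid_create -z 64 -s -r raid0 ... -n Existed_Raid` the `base_bdevs_list` entries in the log report `"data_offset": 2048` and `"data_size": 63488`. A quick arithmetic check (the interpretation that the `-s` superblock accounts for the 2048-block reservation is an inference from these log values, not stated in the log itself):

```python
# Each base bdev is 32 MiB with a 512 B block size:
num_blocks = 32 * 1024 * 1024 // 512   # 65536 blocks, as dumped above

# The raid was created with `-s` (superblock); the configured base bdevs in the
# log report data_offset 2048 blocks (2048 * 512 B = 1 MiB reserved at the
# front), leaving the remainder as usable data.
data_offset = 2048
data_size = num_blocks - data_offset
print(data_size)  # → 63488, matching the "data_size" fields in the log
```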
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.624 [2024-09-27 22:30:04.452108] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:08.624 [2024-09-27 22:30:04.452163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:08.624 [2024-09-27 22:30:04.452194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.624 [2024-09-27 22:30:04.454511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.624 [2024-09-27 22:30:04.454578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.624 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.882 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.882 "name": "Existed_Raid", 00:13:08.882 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:08.882 "strip_size_kb": 64, 00:13:08.882 "state": "configuring", 00:13:08.882 "raid_level": "raid0", 00:13:08.882 "superblock": true, 00:13:08.882 "num_base_bdevs": 4, 00:13:08.882 "num_base_bdevs_discovered": 3, 00:13:08.882 "num_base_bdevs_operational": 4, 00:13:08.882 "base_bdevs_list": [ 00:13:08.882 { 00:13:08.882 "name": "BaseBdev1", 00:13:08.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.882 "is_configured": false, 00:13:08.882 "data_offset": 0, 00:13:08.882 "data_size": 0 00:13:08.882 }, 00:13:08.882 { 00:13:08.882 "name": "BaseBdev2", 00:13:08.882 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:08.882 "is_configured": true, 00:13:08.882 "data_offset": 2048, 00:13:08.882 "data_size": 63488 
00:13:08.882 }, 00:13:08.882 { 00:13:08.882 "name": "BaseBdev3", 00:13:08.882 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:08.882 "is_configured": true, 00:13:08.882 "data_offset": 2048, 00:13:08.882 "data_size": 63488 00:13:08.882 }, 00:13:08.882 { 00:13:08.882 "name": "BaseBdev4", 00:13:08.882 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:08.882 "is_configured": true, 00:13:08.882 "data_offset": 2048, 00:13:08.882 "data_size": 63488 00:13:08.882 } 00:13:08.883 ] 00:13:08.883 }' 00:13:08.883 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.883 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.141 [2024-09-27 22:30:04.867539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.141 "name": "Existed_Raid", 00:13:09.141 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:09.141 "strip_size_kb": 64, 00:13:09.141 "state": "configuring", 00:13:09.141 "raid_level": "raid0", 00:13:09.141 "superblock": true, 00:13:09.141 "num_base_bdevs": 4, 00:13:09.141 "num_base_bdevs_discovered": 2, 00:13:09.141 "num_base_bdevs_operational": 4, 00:13:09.141 "base_bdevs_list": [ 00:13:09.141 { 00:13:09.141 "name": "BaseBdev1", 00:13:09.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.141 "is_configured": false, 00:13:09.141 "data_offset": 0, 00:13:09.141 "data_size": 0 00:13:09.141 }, 00:13:09.141 { 00:13:09.141 "name": null, 00:13:09.141 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:09.141 "is_configured": false, 00:13:09.141 "data_offset": 0, 00:13:09.141 "data_size": 63488 
00:13:09.141 }, 00:13:09.141 { 00:13:09.141 "name": "BaseBdev3", 00:13:09.141 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:09.141 "is_configured": true, 00:13:09.141 "data_offset": 2048, 00:13:09.141 "data_size": 63488 00:13:09.141 }, 00:13:09.141 { 00:13:09.141 "name": "BaseBdev4", 00:13:09.141 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:09.141 "is_configured": true, 00:13:09.141 "data_offset": 2048, 00:13:09.141 "data_size": 63488 00:13:09.141 } 00:13:09.141 ] 00:13:09.141 }' 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.141 22:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 [2024-09-27 22:30:05.453575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.707 BaseBdev1 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 [ 00:13:09.707 { 00:13:09.707 "name": "BaseBdev1", 00:13:09.707 "aliases": [ 00:13:09.707 "82e83c18-99f2-452a-8d05-c45667b117e7" 00:13:09.707 ], 00:13:09.707 "product_name": "Malloc disk", 00:13:09.707 "block_size": 512, 00:13:09.707 "num_blocks": 65536, 00:13:09.707 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:09.707 "assigned_rate_limits": { 00:13:09.707 "rw_ios_per_sec": 0, 00:13:09.707 "rw_mbytes_per_sec": 0, 
00:13:09.707 "r_mbytes_per_sec": 0, 00:13:09.707 "w_mbytes_per_sec": 0 00:13:09.707 }, 00:13:09.707 "claimed": true, 00:13:09.707 "claim_type": "exclusive_write", 00:13:09.707 "zoned": false, 00:13:09.707 "supported_io_types": { 00:13:09.707 "read": true, 00:13:09.707 "write": true, 00:13:09.707 "unmap": true, 00:13:09.707 "flush": true, 00:13:09.707 "reset": true, 00:13:09.707 "nvme_admin": false, 00:13:09.707 "nvme_io": false, 00:13:09.707 "nvme_io_md": false, 00:13:09.707 "write_zeroes": true, 00:13:09.707 "zcopy": true, 00:13:09.707 "get_zone_info": false, 00:13:09.707 "zone_management": false, 00:13:09.707 "zone_append": false, 00:13:09.707 "compare": false, 00:13:09.707 "compare_and_write": false, 00:13:09.707 "abort": true, 00:13:09.707 "seek_hole": false, 00:13:09.707 "seek_data": false, 00:13:09.707 "copy": true, 00:13:09.707 "nvme_iov_md": false 00:13:09.707 }, 00:13:09.707 "memory_domains": [ 00:13:09.707 { 00:13:09.707 "dma_device_id": "system", 00:13:09.707 "dma_device_type": 1 00:13:09.707 }, 00:13:09.707 { 00:13:09.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.707 "dma_device_type": 2 00:13:09.707 } 00:13:09.707 ], 00:13:09.707 "driver_specific": {} 00:13:09.707 } 00:13:09.707 ] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.707 22:30:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.707 "name": "Existed_Raid", 00:13:09.707 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:09.707 "strip_size_kb": 64, 00:13:09.707 "state": "configuring", 00:13:09.707 "raid_level": "raid0", 00:13:09.707 "superblock": true, 00:13:09.707 "num_base_bdevs": 4, 00:13:09.707 "num_base_bdevs_discovered": 3, 00:13:09.707 "num_base_bdevs_operational": 4, 00:13:09.707 "base_bdevs_list": [ 00:13:09.707 { 00:13:09.707 "name": "BaseBdev1", 00:13:09.707 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:09.707 "is_configured": true, 00:13:09.707 "data_offset": 2048, 00:13:09.707 "data_size": 63488 00:13:09.707 }, 00:13:09.707 { 
00:13:09.707 "name": null, 00:13:09.707 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:09.707 "is_configured": false, 00:13:09.707 "data_offset": 0, 00:13:09.707 "data_size": 63488 00:13:09.707 }, 00:13:09.707 { 00:13:09.707 "name": "BaseBdev3", 00:13:09.707 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:09.707 "is_configured": true, 00:13:09.707 "data_offset": 2048, 00:13:09.707 "data_size": 63488 00:13:09.707 }, 00:13:09.707 { 00:13:09.707 "name": "BaseBdev4", 00:13:09.707 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:09.707 "is_configured": true, 00:13:09.707 "data_offset": 2048, 00:13:09.707 "data_size": 63488 00:13:09.707 } 00:13:09.707 ] 00:13:09.707 }' 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.707 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.285 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:10.285 22:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.285 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.286 22:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.286 [2024-09-27 22:30:06.029240] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.286 22:30:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.286 "name": "Existed_Raid", 00:13:10.286 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:10.286 "strip_size_kb": 64, 00:13:10.286 "state": "configuring", 00:13:10.286 "raid_level": "raid0", 00:13:10.286 "superblock": true, 00:13:10.286 "num_base_bdevs": 4, 00:13:10.286 "num_base_bdevs_discovered": 2, 00:13:10.286 "num_base_bdevs_operational": 4, 00:13:10.286 "base_bdevs_list": [ 00:13:10.286 { 00:13:10.286 "name": "BaseBdev1", 00:13:10.286 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:10.286 "is_configured": true, 00:13:10.286 "data_offset": 2048, 00:13:10.286 "data_size": 63488 00:13:10.286 }, 00:13:10.286 { 00:13:10.286 "name": null, 00:13:10.286 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:10.286 "is_configured": false, 00:13:10.286 "data_offset": 0, 00:13:10.286 "data_size": 63488 00:13:10.286 }, 00:13:10.286 { 00:13:10.286 "name": null, 00:13:10.286 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:10.286 "is_configured": false, 00:13:10.286 "data_offset": 0, 00:13:10.286 "data_size": 63488 00:13:10.286 }, 00:13:10.286 { 00:13:10.286 "name": "BaseBdev4", 00:13:10.286 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:10.286 "is_configured": true, 00:13:10.286 "data_offset": 2048, 00:13:10.286 "data_size": 63488 00:13:10.286 } 00:13:10.286 ] 00:13:10.286 }' 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.286 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.568 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.568 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:10.568 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.568 
22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.833 [2024-09-27 22:30:06.485183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.833 "name": "Existed_Raid", 00:13:10.833 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:10.833 "strip_size_kb": 64, 00:13:10.833 "state": "configuring", 00:13:10.833 "raid_level": "raid0", 00:13:10.833 "superblock": true, 00:13:10.833 "num_base_bdevs": 4, 00:13:10.833 "num_base_bdevs_discovered": 3, 00:13:10.833 "num_base_bdevs_operational": 4, 00:13:10.833 "base_bdevs_list": [ 00:13:10.833 { 00:13:10.833 "name": "BaseBdev1", 00:13:10.833 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:10.833 "is_configured": true, 00:13:10.833 "data_offset": 2048, 00:13:10.833 "data_size": 63488 00:13:10.833 }, 00:13:10.833 { 00:13:10.833 "name": null, 00:13:10.833 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:10.833 "is_configured": false, 00:13:10.833 "data_offset": 0, 00:13:10.833 "data_size": 63488 00:13:10.833 }, 00:13:10.833 { 00:13:10.833 "name": "BaseBdev3", 00:13:10.833 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:10.833 "is_configured": true, 00:13:10.833 "data_offset": 2048, 00:13:10.833 "data_size": 63488 00:13:10.833 }, 00:13:10.833 { 00:13:10.833 "name": "BaseBdev4", 00:13:10.833 "uuid": 
"cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:10.833 "is_configured": true, 00:13:10.833 "data_offset": 2048, 00:13:10.833 "data_size": 63488 00:13:10.833 } 00:13:10.833 ] 00:13:10.833 }' 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.833 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.091 22:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.091 [2024-09-27 22:30:06.960594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.348 "name": "Existed_Raid", 00:13:11.348 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:11.348 "strip_size_kb": 64, 00:13:11.348 "state": "configuring", 00:13:11.348 "raid_level": "raid0", 00:13:11.348 "superblock": true, 00:13:11.348 "num_base_bdevs": 4, 00:13:11.348 "num_base_bdevs_discovered": 2, 00:13:11.348 "num_base_bdevs_operational": 4, 00:13:11.348 "base_bdevs_list": [ 00:13:11.348 { 00:13:11.348 "name": null, 00:13:11.348 
"uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:11.348 "is_configured": false, 00:13:11.348 "data_offset": 0, 00:13:11.348 "data_size": 63488 00:13:11.348 }, 00:13:11.348 { 00:13:11.348 "name": null, 00:13:11.348 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:11.348 "is_configured": false, 00:13:11.348 "data_offset": 0, 00:13:11.348 "data_size": 63488 00:13:11.348 }, 00:13:11.348 { 00:13:11.348 "name": "BaseBdev3", 00:13:11.348 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:11.348 "is_configured": true, 00:13:11.348 "data_offset": 2048, 00:13:11.348 "data_size": 63488 00:13:11.348 }, 00:13:11.348 { 00:13:11.348 "name": "BaseBdev4", 00:13:11.348 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:11.348 "is_configured": true, 00:13:11.348 "data_offset": 2048, 00:13:11.348 "data_size": 63488 00:13:11.348 } 00:13:11.348 ] 00:13:11.348 }' 00:13:11.348 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.349 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.916 [2024-09-27 22:30:07.545308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.916 22:30:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.916 "name": "Existed_Raid", 00:13:11.916 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:11.916 "strip_size_kb": 64, 00:13:11.916 "state": "configuring", 00:13:11.916 "raid_level": "raid0", 00:13:11.916 "superblock": true, 00:13:11.916 "num_base_bdevs": 4, 00:13:11.916 "num_base_bdevs_discovered": 3, 00:13:11.916 "num_base_bdevs_operational": 4, 00:13:11.916 "base_bdevs_list": [ 00:13:11.916 { 00:13:11.916 "name": null, 00:13:11.916 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:11.916 "is_configured": false, 00:13:11.916 "data_offset": 0, 00:13:11.916 "data_size": 63488 00:13:11.916 }, 00:13:11.916 { 00:13:11.916 "name": "BaseBdev2", 00:13:11.916 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:11.916 "is_configured": true, 00:13:11.916 "data_offset": 2048, 00:13:11.916 "data_size": 63488 00:13:11.916 }, 00:13:11.916 { 00:13:11.916 "name": "BaseBdev3", 00:13:11.916 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:11.916 "is_configured": true, 00:13:11.916 "data_offset": 2048, 00:13:11.916 "data_size": 63488 00:13:11.916 }, 00:13:11.916 { 00:13:11.916 "name": "BaseBdev4", 00:13:11.916 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:11.916 "is_configured": true, 00:13:11.916 "data_offset": 2048, 00:13:11.916 "data_size": 63488 00:13:11.916 } 00:13:11.916 ] 00:13:11.916 }' 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.916 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.176 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.176 22:30:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.176 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.176 22:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:12.176 22:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.176 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:12.176 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.176 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:12.176 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.176 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.176 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.435 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82e83c18-99f2-452a-8d05-c45667b117e7 00:13:12.435 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.435 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.435 [2024-09-27 22:30:08.111187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:12.435 [2024-09-27 22:30:08.111464] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:12.436 [2024-09-27 22:30:08.111480] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:12.436 [2024-09-27 22:30:08.111775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:12.436 [2024-09-27 22:30:08.111924] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:12.436 [2024-09-27 22:30:08.111939] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:12.436 [2024-09-27 22:30:08.112101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.436 NewBaseBdev 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.436 22:30:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.436 [ 00:13:12.436 { 00:13:12.436 "name": "NewBaseBdev", 00:13:12.436 "aliases": [ 00:13:12.436 "82e83c18-99f2-452a-8d05-c45667b117e7" 00:13:12.436 ], 00:13:12.436 "product_name": "Malloc disk", 00:13:12.436 "block_size": 512, 00:13:12.436 "num_blocks": 65536, 00:13:12.436 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:12.436 "assigned_rate_limits": { 00:13:12.436 "rw_ios_per_sec": 0, 00:13:12.436 "rw_mbytes_per_sec": 0, 00:13:12.436 "r_mbytes_per_sec": 0, 00:13:12.436 "w_mbytes_per_sec": 0 00:13:12.436 }, 00:13:12.436 "claimed": true, 00:13:12.436 "claim_type": "exclusive_write", 00:13:12.436 "zoned": false, 00:13:12.436 "supported_io_types": { 00:13:12.436 "read": true, 00:13:12.436 "write": true, 00:13:12.436 "unmap": true, 00:13:12.436 "flush": true, 00:13:12.436 "reset": true, 00:13:12.436 "nvme_admin": false, 00:13:12.436 "nvme_io": false, 00:13:12.436 "nvme_io_md": false, 00:13:12.436 "write_zeroes": true, 00:13:12.436 "zcopy": true, 00:13:12.436 "get_zone_info": false, 00:13:12.436 "zone_management": false, 00:13:12.436 "zone_append": false, 00:13:12.436 "compare": false, 00:13:12.436 "compare_and_write": false, 00:13:12.436 "abort": true, 00:13:12.436 "seek_hole": false, 00:13:12.436 "seek_data": false, 00:13:12.436 "copy": true, 00:13:12.436 "nvme_iov_md": false 00:13:12.436 }, 00:13:12.436 "memory_domains": [ 00:13:12.436 { 00:13:12.436 "dma_device_id": "system", 00:13:12.436 "dma_device_type": 1 00:13:12.436 }, 00:13:12.436 { 00:13:12.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.436 "dma_device_type": 2 00:13:12.436 } 00:13:12.436 ], 00:13:12.436 "driver_specific": {} 00:13:12.436 } 00:13:12.436 ] 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:12.436 22:30:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.436 "name": "Existed_Raid", 00:13:12.436 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:12.436 "strip_size_kb": 64, 00:13:12.436 
"state": "online", 00:13:12.436 "raid_level": "raid0", 00:13:12.436 "superblock": true, 00:13:12.436 "num_base_bdevs": 4, 00:13:12.436 "num_base_bdevs_discovered": 4, 00:13:12.436 "num_base_bdevs_operational": 4, 00:13:12.436 "base_bdevs_list": [ 00:13:12.436 { 00:13:12.436 "name": "NewBaseBdev", 00:13:12.436 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:12.436 "is_configured": true, 00:13:12.436 "data_offset": 2048, 00:13:12.436 "data_size": 63488 00:13:12.436 }, 00:13:12.436 { 00:13:12.436 "name": "BaseBdev2", 00:13:12.436 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:12.436 "is_configured": true, 00:13:12.436 "data_offset": 2048, 00:13:12.436 "data_size": 63488 00:13:12.436 }, 00:13:12.436 { 00:13:12.436 "name": "BaseBdev3", 00:13:12.436 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:12.436 "is_configured": true, 00:13:12.436 "data_offset": 2048, 00:13:12.436 "data_size": 63488 00:13:12.436 }, 00:13:12.436 { 00:13:12.436 "name": "BaseBdev4", 00:13:12.436 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:12.436 "is_configured": true, 00:13:12.436 "data_offset": 2048, 00:13:12.436 "data_size": 63488 00:13:12.436 } 00:13:12.436 ] 00:13:12.436 }' 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.436 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.004 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:13.004 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:13.004 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.004 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.004 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.005 
22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.005 [2024-09-27 22:30:08.587351] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.005 "name": "Existed_Raid", 00:13:13.005 "aliases": [ 00:13:13.005 "778c82e2-479e-4f72-9ac8-b963a1c36506" 00:13:13.005 ], 00:13:13.005 "product_name": "Raid Volume", 00:13:13.005 "block_size": 512, 00:13:13.005 "num_blocks": 253952, 00:13:13.005 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:13.005 "assigned_rate_limits": { 00:13:13.005 "rw_ios_per_sec": 0, 00:13:13.005 "rw_mbytes_per_sec": 0, 00:13:13.005 "r_mbytes_per_sec": 0, 00:13:13.005 "w_mbytes_per_sec": 0 00:13:13.005 }, 00:13:13.005 "claimed": false, 00:13:13.005 "zoned": false, 00:13:13.005 "supported_io_types": { 00:13:13.005 "read": true, 00:13:13.005 "write": true, 00:13:13.005 "unmap": true, 00:13:13.005 "flush": true, 00:13:13.005 "reset": true, 00:13:13.005 "nvme_admin": false, 00:13:13.005 "nvme_io": false, 00:13:13.005 "nvme_io_md": false, 00:13:13.005 "write_zeroes": true, 00:13:13.005 "zcopy": false, 00:13:13.005 "get_zone_info": false, 00:13:13.005 "zone_management": false, 00:13:13.005 "zone_append": false, 00:13:13.005 "compare": false, 00:13:13.005 "compare_and_write": false, 00:13:13.005 "abort": 
false, 00:13:13.005 "seek_hole": false, 00:13:13.005 "seek_data": false, 00:13:13.005 "copy": false, 00:13:13.005 "nvme_iov_md": false 00:13:13.005 }, 00:13:13.005 "memory_domains": [ 00:13:13.005 { 00:13:13.005 "dma_device_id": "system", 00:13:13.005 "dma_device_type": 1 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.005 "dma_device_type": 2 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "system", 00:13:13.005 "dma_device_type": 1 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.005 "dma_device_type": 2 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "system", 00:13:13.005 "dma_device_type": 1 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.005 "dma_device_type": 2 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "system", 00:13:13.005 "dma_device_type": 1 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.005 "dma_device_type": 2 00:13:13.005 } 00:13:13.005 ], 00:13:13.005 "driver_specific": { 00:13:13.005 "raid": { 00:13:13.005 "uuid": "778c82e2-479e-4f72-9ac8-b963a1c36506", 00:13:13.005 "strip_size_kb": 64, 00:13:13.005 "state": "online", 00:13:13.005 "raid_level": "raid0", 00:13:13.005 "superblock": true, 00:13:13.005 "num_base_bdevs": 4, 00:13:13.005 "num_base_bdevs_discovered": 4, 00:13:13.005 "num_base_bdevs_operational": 4, 00:13:13.005 "base_bdevs_list": [ 00:13:13.005 { 00:13:13.005 "name": "NewBaseBdev", 00:13:13.005 "uuid": "82e83c18-99f2-452a-8d05-c45667b117e7", 00:13:13.005 "is_configured": true, 00:13:13.005 "data_offset": 2048, 00:13:13.005 "data_size": 63488 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "name": "BaseBdev2", 00:13:13.005 "uuid": "311d7637-a171-41e9-ae26-d310047d92e9", 00:13:13.005 "is_configured": true, 00:13:13.005 "data_offset": 2048, 00:13:13.005 "data_size": 63488 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 
"name": "BaseBdev3", 00:13:13.005 "uuid": "a1ad1868-3284-425b-88ca-60f03a7c4cd1", 00:13:13.005 "is_configured": true, 00:13:13.005 "data_offset": 2048, 00:13:13.005 "data_size": 63488 00:13:13.005 }, 00:13:13.005 { 00:13:13.005 "name": "BaseBdev4", 00:13:13.005 "uuid": "cce9466b-2d07-415a-8870-82b9c5276db7", 00:13:13.005 "is_configured": true, 00:13:13.005 "data_offset": 2048, 00:13:13.005 "data_size": 63488 00:13:13.005 } 00:13:13.005 ] 00:13:13.005 } 00:13:13.005 } 00:13:13.005 }' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:13.005 BaseBdev2 00:13:13.005 BaseBdev3 00:13:13.005 BaseBdev4' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.005 22:30:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.005 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.264 [2024-09-27 22:30:08.902507] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.264 [2024-09-27 22:30:08.902552] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.264 [2024-09-27 22:30:08.902647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.264 [2024-09-27 22:30:08.902721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.264 [2024-09-27 22:30:08.902734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70720 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70720 ']' 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70720 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70720 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70720' 00:13:13.264 killing process with pid 70720 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70720 00:13:13.264 [2024-09-27 22:30:08.957651] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.264 22:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70720 00:13:13.557 [2024-09-27 22:30:09.398549] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.098 22:30:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:16.098 00:13:16.098 real 0m13.110s 00:13:16.098 user 0m19.931s 00:13:16.098 sys 0m2.389s 00:13:16.098 22:30:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.098 22:30:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.098 ************************************ 00:13:16.098 END TEST raid_state_function_test_sb 00:13:16.098 ************************************ 00:13:16.098 22:30:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:16.098 22:30:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:16.098 22:30:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.098 22:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.098 ************************************ 00:13:16.098 START TEST raid_superblock_test 00:13:16.098 ************************************ 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71406 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71406 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71406 ']' 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.098 22:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.098 [2024-09-27 22:30:11.727733] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:13:16.098 [2024-09-27 22:30:11.727880] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 00:13:16.098 [2024-09-27 22:30:11.891725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.357 [2024-09-27 22:30:12.143481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.615 [2024-09-27 22:30:12.402290] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.615 [2024-09-27 22:30:12.402334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:17.193 
22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.193 malloc1 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.193 [2024-09-27 22:30:12.974424] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:17.193 [2024-09-27 22:30:12.974524] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.193 [2024-09-27 22:30:12.974563] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:17.193 [2024-09-27 22:30:12.974580] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.193 [2024-09-27 22:30:12.977317] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.193 [2024-09-27 22:30:12.977372] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:17.193 pt1 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.193 22:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.193 malloc2 00:13:17.193 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.193 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:17.193 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.193 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.193 [2024-09-27 22:30:13.039665] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:17.193 [2024-09-27 22:30:13.039740] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.193 [2024-09-27 22:30:13.039773] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:17.194 [2024-09-27 22:30:13.039785] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.194 [2024-09-27 22:30:13.042451] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.194 [2024-09-27 22:30:13.042499] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:17.194 
pt2 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.194 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.453 malloc3 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.453 [2024-09-27 22:30:13.104010] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:17.453 [2024-09-27 22:30:13.104095] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.453 [2024-09-27 22:30:13.104121] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:17.453 [2024-09-27 22:30:13.104134] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.453 [2024-09-27 22:30:13.106768] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.453 [2024-09-27 22:30:13.106822] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:17.453 pt3 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.453 malloc4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.453 [2024-09-27 22:30:13.168806] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:17.453 [2024-09-27 22:30:13.169074] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.453 [2024-09-27 22:30:13.169128] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:17.453 [2024-09-27 22:30:13.169142] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.453 [2024-09-27 22:30:13.171882] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.453 [2024-09-27 22:30:13.171932] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:17.453 pt4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.453 [2024-09-27 22:30:13.180934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:17.453 [2024-09-27 
22:30:13.183282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:17.453 [2024-09-27 22:30:13.183387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:17.453 [2024-09-27 22:30:13.183458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:17.453 [2024-09-27 22:30:13.183678] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:17.453 [2024-09-27 22:30:13.183698] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:17.453 [2024-09-27 22:30:13.184040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:17.453 [2024-09-27 22:30:13.184236] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:17.453 [2024-09-27 22:30:13.184252] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:17.453 [2024-09-27 22:30:13.184430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.453 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.453 "name": "raid_bdev1", 00:13:17.453 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:17.453 "strip_size_kb": 64, 00:13:17.453 "state": "online", 00:13:17.453 "raid_level": "raid0", 00:13:17.453 "superblock": true, 00:13:17.453 "num_base_bdevs": 4, 00:13:17.453 "num_base_bdevs_discovered": 4, 00:13:17.453 "num_base_bdevs_operational": 4, 00:13:17.453 "base_bdevs_list": [ 00:13:17.453 { 00:13:17.453 "name": "pt1", 00:13:17.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.453 "is_configured": true, 00:13:17.453 "data_offset": 2048, 00:13:17.454 "data_size": 63488 00:13:17.454 }, 00:13:17.454 { 00:13:17.454 "name": "pt2", 00:13:17.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.454 "is_configured": true, 00:13:17.454 "data_offset": 2048, 00:13:17.454 "data_size": 63488 00:13:17.454 }, 00:13:17.454 { 00:13:17.454 "name": "pt3", 00:13:17.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.454 "is_configured": true, 00:13:17.454 "data_offset": 2048, 00:13:17.454 
"data_size": 63488 00:13:17.454 }, 00:13:17.454 { 00:13:17.454 "name": "pt4", 00:13:17.454 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:17.454 "is_configured": true, 00:13:17.454 "data_offset": 2048, 00:13:17.454 "data_size": 63488 00:13:17.454 } 00:13:17.454 ] 00:13:17.454 }' 00:13:17.454 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.454 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.021 [2024-09-27 22:30:13.664549] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.021 "name": "raid_bdev1", 00:13:18.021 "aliases": [ 00:13:18.021 "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6" 
00:13:18.021 ], 00:13:18.021 "product_name": "Raid Volume", 00:13:18.021 "block_size": 512, 00:13:18.021 "num_blocks": 253952, 00:13:18.021 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:18.021 "assigned_rate_limits": { 00:13:18.021 "rw_ios_per_sec": 0, 00:13:18.021 "rw_mbytes_per_sec": 0, 00:13:18.021 "r_mbytes_per_sec": 0, 00:13:18.021 "w_mbytes_per_sec": 0 00:13:18.021 }, 00:13:18.021 "claimed": false, 00:13:18.021 "zoned": false, 00:13:18.021 "supported_io_types": { 00:13:18.021 "read": true, 00:13:18.021 "write": true, 00:13:18.021 "unmap": true, 00:13:18.021 "flush": true, 00:13:18.021 "reset": true, 00:13:18.021 "nvme_admin": false, 00:13:18.021 "nvme_io": false, 00:13:18.021 "nvme_io_md": false, 00:13:18.021 "write_zeroes": true, 00:13:18.021 "zcopy": false, 00:13:18.021 "get_zone_info": false, 00:13:18.021 "zone_management": false, 00:13:18.021 "zone_append": false, 00:13:18.021 "compare": false, 00:13:18.021 "compare_and_write": false, 00:13:18.021 "abort": false, 00:13:18.021 "seek_hole": false, 00:13:18.021 "seek_data": false, 00:13:18.021 "copy": false, 00:13:18.021 "nvme_iov_md": false 00:13:18.021 }, 00:13:18.021 "memory_domains": [ 00:13:18.021 { 00:13:18.021 "dma_device_id": "system", 00:13:18.021 "dma_device_type": 1 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.021 "dma_device_type": 2 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": "system", 00:13:18.021 "dma_device_type": 1 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.021 "dma_device_type": 2 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": "system", 00:13:18.021 "dma_device_type": 1 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.021 "dma_device_type": 2 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": "system", 00:13:18.021 "dma_device_type": 1 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:18.021 "dma_device_type": 2 00:13:18.021 } 00:13:18.021 ], 00:13:18.021 "driver_specific": { 00:13:18.021 "raid": { 00:13:18.021 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:18.021 "strip_size_kb": 64, 00:13:18.021 "state": "online", 00:13:18.021 "raid_level": "raid0", 00:13:18.021 "superblock": true, 00:13:18.021 "num_base_bdevs": 4, 00:13:18.021 "num_base_bdevs_discovered": 4, 00:13:18.021 "num_base_bdevs_operational": 4, 00:13:18.021 "base_bdevs_list": [ 00:13:18.021 { 00:13:18.021 "name": "pt1", 00:13:18.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.021 "is_configured": true, 00:13:18.021 "data_offset": 2048, 00:13:18.021 "data_size": 63488 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "name": "pt2", 00:13:18.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.021 "is_configured": true, 00:13:18.021 "data_offset": 2048, 00:13:18.021 "data_size": 63488 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "name": "pt3", 00:13:18.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:18.021 "is_configured": true, 00:13:18.021 "data_offset": 2048, 00:13:18.021 "data_size": 63488 00:13:18.021 }, 00:13:18.021 { 00:13:18.021 "name": "pt4", 00:13:18.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:18.021 "is_configured": true, 00:13:18.021 "data_offset": 2048, 00:13:18.021 "data_size": 63488 00:13:18.021 } 00:13:18.021 ] 00:13:18.021 } 00:13:18.021 } 00:13:18.021 }' 00:13:18.021 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:18.022 pt2 00:13:18.022 pt3 00:13:18.022 pt4' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.022 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.281 22:30:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.281 22:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.281 [2024-09-27 22:30:14.008043] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6 ']' 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.281 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.281 [2024-09-27 22:30:14.051652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.282 [2024-09-27 22:30:14.051693] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.282 [2024-09-27 22:30:14.051780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.282 [2024-09-27 22:30:14.051853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.282 [2024-09-27 22:30:14.051872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.282 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.540 [2024-09-27 22:30:14.211535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:18.540 [2024-09-27 22:30:14.213874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:18.540 [2024-09-27 22:30:14.214114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:18.540 [2024-09-27 22:30:14.214165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:18.540 [2024-09-27 22:30:14.214235] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:18.540 [2024-09-27 22:30:14.214292] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:18.540 [2024-09-27 22:30:14.214316] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:18.540 [2024-09-27 22:30:14.214339] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:18.540 [2024-09-27 22:30:14.214356] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.540 [2024-09-27 22:30:14.214371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:18.540 request: 00:13:18.540 { 00:13:18.540 "name": "raid_bdev1", 00:13:18.540 "raid_level": "raid0", 00:13:18.540 "base_bdevs": [ 00:13:18.540 "malloc1", 00:13:18.540 "malloc2", 00:13:18.540 "malloc3", 00:13:18.540 "malloc4" 00:13:18.540 ], 00:13:18.540 "strip_size_kb": 64, 00:13:18.540 "superblock": false, 00:13:18.540 "method": "bdev_raid_create", 00:13:18.540 "req_id": 1 00:13:18.540 } 00:13:18.540 Got JSON-RPC error response 00:13:18.540 response: 00:13:18.540 { 00:13:18.540 "code": -17, 00:13:18.540 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:18.540 } 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.540 [2024-09-27 22:30:14.279524] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:18.540 [2024-09-27 22:30:14.279761] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.540 [2024-09-27 22:30:14.279794] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:18.540 [2024-09-27 22:30:14.279810] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.540 [2024-09-27 22:30:14.282425] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.540 [2024-09-27 22:30:14.282479] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:18.540 [2024-09-27 22:30:14.282575] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:18.540 [2024-09-27 22:30:14.282649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:18.540 pt1 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:18.540 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.541 "name": "raid_bdev1", 00:13:18.541 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:18.541 "strip_size_kb": 64, 00:13:18.541 "state": "configuring", 00:13:18.541 "raid_level": "raid0", 00:13:18.541 "superblock": true, 00:13:18.541 "num_base_bdevs": 4, 00:13:18.541 "num_base_bdevs_discovered": 1, 00:13:18.541 "num_base_bdevs_operational": 4, 00:13:18.541 "base_bdevs_list": [ 00:13:18.541 { 00:13:18.541 "name": "pt1", 00:13:18.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.541 "is_configured": true, 00:13:18.541 "data_offset": 2048, 00:13:18.541 "data_size": 63488 00:13:18.541 }, 00:13:18.541 { 00:13:18.541 "name": null, 00:13:18.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.541 "is_configured": false, 00:13:18.541 "data_offset": 2048, 00:13:18.541 "data_size": 63488 00:13:18.541 }, 00:13:18.541 { 00:13:18.541 "name": null, 00:13:18.541 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:18.541 "is_configured": false, 00:13:18.541 "data_offset": 2048, 00:13:18.541 "data_size": 63488 00:13:18.541 }, 00:13:18.541 { 00:13:18.541 "name": null, 00:13:18.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:18.541 "is_configured": false, 00:13:18.541 "data_offset": 2048, 00:13:18.541 "data_size": 63488 00:13:18.541 } 00:13:18.541 ] 00:13:18.541 }' 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.541 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.108 [2024-09-27 22:30:14.735539] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:19.108 [2024-09-27 22:30:14.735797] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.108 [2024-09-27 22:30:14.735830] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:19.108 [2024-09-27 22:30:14.735846] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.108 [2024-09-27 22:30:14.736376] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.108 [2024-09-27 22:30:14.736401] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:19.108 [2024-09-27 22:30:14.736489] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:19.108 [2024-09-27 22:30:14.736529] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:19.108 pt2 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.108 [2024-09-27 22:30:14.747562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.108 22:30:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.108 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.108 "name": "raid_bdev1", 00:13:19.108 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:19.108 "strip_size_kb": 64, 00:13:19.108 "state": "configuring", 00:13:19.108 "raid_level": "raid0", 00:13:19.108 "superblock": true, 00:13:19.108 "num_base_bdevs": 4, 00:13:19.108 "num_base_bdevs_discovered": 1, 00:13:19.108 "num_base_bdevs_operational": 4, 00:13:19.108 "base_bdevs_list": [ 00:13:19.108 { 00:13:19.108 "name": "pt1", 00:13:19.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.108 "is_configured": true, 00:13:19.108 "data_offset": 2048, 00:13:19.108 "data_size": 63488 00:13:19.108 }, 00:13:19.108 { 00:13:19.108 "name": null, 00:13:19.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.108 "is_configured": false, 00:13:19.108 "data_offset": 0, 00:13:19.108 "data_size": 63488 00:13:19.108 }, 00:13:19.108 { 00:13:19.108 "name": null, 00:13:19.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.108 "is_configured": false, 00:13:19.108 "data_offset": 2048, 00:13:19.108 "data_size": 63488 00:13:19.109 }, 00:13:19.109 { 00:13:19.109 "name": null, 00:13:19.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:19.109 "is_configured": false, 00:13:19.109 "data_offset": 2048, 00:13:19.109 "data_size": 63488 00:13:19.109 } 00:13:19.109 ] 00:13:19.109 }' 00:13:19.109 22:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.109 22:30:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.368 [2024-09-27 22:30:15.167572] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:19.368 [2024-09-27 22:30:15.167657] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.368 [2024-09-27 22:30:15.167686] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:19.368 [2024-09-27 22:30:15.167706] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.368 [2024-09-27 22:30:15.168245] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.368 [2024-09-27 22:30:15.168268] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:19.368 [2024-09-27 22:30:15.168362] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:19.368 [2024-09-27 22:30:15.168386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:19.368 pt2 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.368 [2024-09-27 22:30:15.179526] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:19.368 [2024-09-27 22:30:15.179599] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.368 [2024-09-27 22:30:15.179626] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:19.368 [2024-09-27 22:30:15.179638] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.368 [2024-09-27 22:30:15.180135] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.368 [2024-09-27 22:30:15.180161] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:19.368 [2024-09-27 22:30:15.180258] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:19.368 [2024-09-27 22:30:15.180287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:19.368 pt3 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.368 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.368 [2024-09-27 22:30:15.191510] 
vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:19.368 [2024-09-27 22:30:15.191583] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.368 [2024-09-27 22:30:15.191610] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:19.368 [2024-09-27 22:30:15.191622] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.368 [2024-09-27 22:30:15.192137] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.368 [2024-09-27 22:30:15.192166] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:19.368 [2024-09-27 22:30:15.192259] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:19.368 [2024-09-27 22:30:15.192283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:19.368 [2024-09-27 22:30:15.192427] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:19.368 [2024-09-27 22:30:15.192437] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:19.368 [2024-09-27 22:30:15.192699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:19.368 [2024-09-27 22:30:15.192855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:19.369 [2024-09-27 22:30:15.192878] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:19.369 [2024-09-27 22:30:15.193030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.369 pt4 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.369 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.627 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.627 "name": "raid_bdev1", 00:13:19.627 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:19.627 "strip_size_kb": 64, 00:13:19.627 "state": "online", 00:13:19.627 "raid_level": "raid0", 00:13:19.627 
"superblock": true, 00:13:19.627 "num_base_bdevs": 4, 00:13:19.627 "num_base_bdevs_discovered": 4, 00:13:19.627 "num_base_bdevs_operational": 4, 00:13:19.627 "base_bdevs_list": [ 00:13:19.627 { 00:13:19.627 "name": "pt1", 00:13:19.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.627 "is_configured": true, 00:13:19.627 "data_offset": 2048, 00:13:19.627 "data_size": 63488 00:13:19.627 }, 00:13:19.627 { 00:13:19.627 "name": "pt2", 00:13:19.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.627 "is_configured": true, 00:13:19.627 "data_offset": 2048, 00:13:19.627 "data_size": 63488 00:13:19.627 }, 00:13:19.627 { 00:13:19.627 "name": "pt3", 00:13:19.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.627 "is_configured": true, 00:13:19.627 "data_offset": 2048, 00:13:19.627 "data_size": 63488 00:13:19.627 }, 00:13:19.627 { 00:13:19.627 "name": "pt4", 00:13:19.627 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:19.627 "is_configured": true, 00:13:19.627 "data_offset": 2048, 00:13:19.627 "data_size": 63488 00:13:19.627 } 00:13:19.627 ] 00:13:19.627 }' 00:13:19.627 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.627 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.886 22:30:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.886 [2024-09-27 22:30:15.659862] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.886 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.886 "name": "raid_bdev1", 00:13:19.886 "aliases": [ 00:13:19.886 "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6" 00:13:19.886 ], 00:13:19.886 "product_name": "Raid Volume", 00:13:19.886 "block_size": 512, 00:13:19.886 "num_blocks": 253952, 00:13:19.886 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:19.886 "assigned_rate_limits": { 00:13:19.886 "rw_ios_per_sec": 0, 00:13:19.886 "rw_mbytes_per_sec": 0, 00:13:19.886 "r_mbytes_per_sec": 0, 00:13:19.886 "w_mbytes_per_sec": 0 00:13:19.886 }, 00:13:19.886 "claimed": false, 00:13:19.886 "zoned": false, 00:13:19.886 "supported_io_types": { 00:13:19.886 "read": true, 00:13:19.886 "write": true, 00:13:19.886 "unmap": true, 00:13:19.886 "flush": true, 00:13:19.886 "reset": true, 00:13:19.886 "nvme_admin": false, 00:13:19.886 "nvme_io": false, 00:13:19.886 "nvme_io_md": false, 00:13:19.886 "write_zeroes": true, 00:13:19.886 "zcopy": false, 00:13:19.886 "get_zone_info": false, 00:13:19.886 "zone_management": false, 00:13:19.886 "zone_append": false, 00:13:19.886 "compare": false, 00:13:19.886 "compare_and_write": false, 00:13:19.886 "abort": false, 00:13:19.886 "seek_hole": false, 00:13:19.886 "seek_data": false, 00:13:19.886 "copy": false, 00:13:19.886 "nvme_iov_md": false 00:13:19.886 }, 00:13:19.886 
"memory_domains": [ 00:13:19.886 { 00:13:19.886 "dma_device_id": "system", 00:13:19.886 "dma_device_type": 1 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.886 "dma_device_type": 2 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "system", 00:13:19.886 "dma_device_type": 1 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.886 "dma_device_type": 2 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "system", 00:13:19.886 "dma_device_type": 1 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.886 "dma_device_type": 2 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "system", 00:13:19.886 "dma_device_type": 1 00:13:19.886 }, 00:13:19.886 { 00:13:19.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.886 "dma_device_type": 2 00:13:19.886 } 00:13:19.886 ], 00:13:19.886 "driver_specific": { 00:13:19.886 "raid": { 00:13:19.886 "uuid": "ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6", 00:13:19.886 "strip_size_kb": 64, 00:13:19.886 "state": "online", 00:13:19.886 "raid_level": "raid0", 00:13:19.886 "superblock": true, 00:13:19.887 "num_base_bdevs": 4, 00:13:19.887 "num_base_bdevs_discovered": 4, 00:13:19.887 "num_base_bdevs_operational": 4, 00:13:19.887 "base_bdevs_list": [ 00:13:19.887 { 00:13:19.887 "name": "pt1", 00:13:19.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.887 "is_configured": true, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 00:13:19.887 }, 00:13:19.887 { 00:13:19.887 "name": "pt2", 00:13:19.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.887 "is_configured": true, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 00:13:19.887 }, 00:13:19.887 { 00:13:19.887 "name": "pt3", 00:13:19.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.887 "is_configured": true, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 
00:13:19.887 }, 00:13:19.887 { 00:13:19.887 "name": "pt4", 00:13:19.887 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:19.887 "is_configured": true, 00:13:19.887 "data_offset": 2048, 00:13:19.887 "data_size": 63488 00:13:19.887 } 00:13:19.887 ] 00:13:19.887 } 00:13:19.887 } 00:13:19.887 }' 00:13:19.887 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.887 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:19.887 pt2 00:13:19.887 pt3 00:13:19.887 pt4' 00:13:19.887 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.146 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.147 22:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.147 [2024-09-27 22:30:15.987828] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.147 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6 '!=' ae25d40d-f9ea-4440-aa4d-71ac6a7f3ff6 ']' 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71406 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71406 ']' 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71406 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71406 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.407 killing process with pid 71406 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71406' 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 71406 00:13:20.407 22:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 71406 00:13:20.407 [2024-09-27 22:30:16.061027] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.407 [2024-09-27 22:30:16.061122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.407 [2024-09-27 22:30:16.061207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.407 [2024-09-27 22:30:16.061223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:20.671 [2024-09-27 22:30:16.506400] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.206 22:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:23.206 00:13:23.206 real 0m6.998s 00:13:23.206 user 0m9.391s 00:13:23.206 sys 0m1.249s 00:13:23.206 22:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.206 22:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.206 ************************************ 00:13:23.206 END TEST raid_superblock_test 
00:13:23.206 ************************************ 00:13:23.206 22:30:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:23.206 22:30:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:23.206 22:30:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.206 22:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.206 ************************************ 00:13:23.206 START TEST raid_read_error_test 00:13:23.206 ************************************ 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:23.206 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5KeqCY2Od1 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71683 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71683 00:13:23.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71683 ']' 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.207 22:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.207 [2024-09-27 22:30:18.821462] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:13:23.207 [2024-09-27 22:30:18.821603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71683 ] 00:13:23.207 [2024-09-27 22:30:18.996733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.466 [2024-09-27 22:30:19.258396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.725 [2024-09-27 22:30:19.516117] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.725 [2024-09-27 22:30:19.516370] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.293 BaseBdev1_malloc 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.293 true 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.293 [2024-09-27 22:30:20.087281] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:24.293 [2024-09-27 22:30:20.087527] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.293 [2024-09-27 22:30:20.087588] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:24.293 [2024-09-27 22:30:20.087684] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.293 [2024-09-27 22:30:20.090469] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.293 [2024-09-27 22:30:20.090652] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.293 BaseBdev1 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.293 BaseBdev2_malloc 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.293 true 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.293 [2024-09-27 22:30:20.164277] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:24.293 [2024-09-27 22:30:20.164490] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.293 [2024-09-27 22:30:20.164558] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:24.293 [2024-09-27 22:30:20.164647] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.293 [2024-09-27 22:30:20.167394] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.293 [2024-09-27 22:30:20.167559] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:24.293 BaseBdev2 00:13:24.293 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 BaseBdev3_malloc 00:13:24.552 22:30:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 true 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 [2024-09-27 22:30:20.240777] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:24.552 [2024-09-27 22:30:20.240858] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.552 [2024-09-27 22:30:20.240884] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:24.552 [2024-09-27 22:30:20.240900] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.552 [2024-09-27 22:30:20.243646] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.552 [2024-09-27 22:30:20.243700] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:24.552 BaseBdev3 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 BaseBdev4_malloc 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 true 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 [2024-09-27 22:30:20.317867] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:24.552 [2024-09-27 22:30:20.318108] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.552 [2024-09-27 22:30:20.318145] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:24.552 [2024-09-27 22:30:20.318161] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.552 [2024-09-27 22:30:20.320871] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.552 [2024-09-27 22:30:20.320932] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:24.552 BaseBdev4 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 [2024-09-27 22:30:20.329975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.552 [2024-09-27 22:30:20.332563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.552 [2024-09-27 22:30:20.332815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.552 [2024-09-27 22:30:20.332927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.552 [2024-09-27 22:30:20.333334] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:24.552 [2024-09-27 22:30:20.333457] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:24.552 [2024-09-27 22:30:20.333815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:24.552 [2024-09-27 22:30:20.334051] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:24.552 [2024-09-27 22:30:20.334099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:24.552 [2024-09-27 22:30:20.334446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:24.552 22:30:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.552 "name": "raid_bdev1", 00:13:24.552 "uuid": "0db582c7-79ec-4607-acb1-47bccccbd56c", 00:13:24.552 "strip_size_kb": 64, 00:13:24.552 "state": "online", 00:13:24.552 "raid_level": "raid0", 00:13:24.552 "superblock": true, 00:13:24.552 "num_base_bdevs": 4, 00:13:24.552 "num_base_bdevs_discovered": 4, 00:13:24.552 "num_base_bdevs_operational": 4, 00:13:24.552 "base_bdevs_list": [ 00:13:24.552 
{ 00:13:24.552 "name": "BaseBdev1", 00:13:24.552 "uuid": "f0d6bb4b-da28-5feb-9970-f871db7ae458", 00:13:24.552 "is_configured": true, 00:13:24.552 "data_offset": 2048, 00:13:24.552 "data_size": 63488 00:13:24.552 }, 00:13:24.552 { 00:13:24.552 "name": "BaseBdev2", 00:13:24.552 "uuid": "c61430f6-26ae-579e-9484-b1191a54c663", 00:13:24.552 "is_configured": true, 00:13:24.552 "data_offset": 2048, 00:13:24.552 "data_size": 63488 00:13:24.552 }, 00:13:24.552 { 00:13:24.552 "name": "BaseBdev3", 00:13:24.552 "uuid": "dc7b8f45-f4a7-5441-a850-7ffdd6d7b0a3", 00:13:24.552 "is_configured": true, 00:13:24.552 "data_offset": 2048, 00:13:24.552 "data_size": 63488 00:13:24.552 }, 00:13:24.552 { 00:13:24.552 "name": "BaseBdev4", 00:13:24.552 "uuid": "18ddea01-2895-562e-85a5-29e15d5b3a82", 00:13:24.552 "is_configured": true, 00:13:24.552 "data_offset": 2048, 00:13:24.552 "data_size": 63488 00:13:24.552 } 00:13:24.552 ] 00:13:24.552 }' 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.552 22:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.118 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:25.118 22:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:25.118 [2024-09-27 22:30:20.915217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.053 22:30:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.053 22:30:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.053 "name": "raid_bdev1", 00:13:26.053 "uuid": "0db582c7-79ec-4607-acb1-47bccccbd56c", 00:13:26.053 "strip_size_kb": 64, 00:13:26.053 "state": "online", 00:13:26.053 "raid_level": "raid0", 00:13:26.053 "superblock": true, 00:13:26.053 "num_base_bdevs": 4, 00:13:26.053 "num_base_bdevs_discovered": 4, 00:13:26.053 "num_base_bdevs_operational": 4, 00:13:26.053 "base_bdevs_list": [ 00:13:26.053 { 00:13:26.053 "name": "BaseBdev1", 00:13:26.053 "uuid": "f0d6bb4b-da28-5feb-9970-f871db7ae458", 00:13:26.053 "is_configured": true, 00:13:26.053 "data_offset": 2048, 00:13:26.053 "data_size": 63488 00:13:26.053 }, 00:13:26.053 { 00:13:26.053 "name": "BaseBdev2", 00:13:26.053 "uuid": "c61430f6-26ae-579e-9484-b1191a54c663", 00:13:26.053 "is_configured": true, 00:13:26.053 "data_offset": 2048, 00:13:26.053 "data_size": 63488 00:13:26.053 }, 00:13:26.053 { 00:13:26.053 "name": "BaseBdev3", 00:13:26.053 "uuid": "dc7b8f45-f4a7-5441-a850-7ffdd6d7b0a3", 00:13:26.053 "is_configured": true, 00:13:26.053 "data_offset": 2048, 00:13:26.053 "data_size": 63488 00:13:26.053 }, 00:13:26.053 { 00:13:26.053 "name": "BaseBdev4", 00:13:26.053 "uuid": "18ddea01-2895-562e-85a5-29e15d5b3a82", 00:13:26.053 "is_configured": true, 00:13:26.053 "data_offset": 2048, 00:13:26.053 "data_size": 63488 00:13:26.053 } 00:13:26.053 ] 00:13:26.053 }' 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.053 22:30:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.620 [2024-09-27 22:30:22.232367] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.620 [2024-09-27 22:30:22.232409] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.620 [2024-09-27 22:30:22.235181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.620 [2024-09-27 22:30:22.235245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.620 [2024-09-27 22:30:22.235291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.620 [2024-09-27 22:30:22.235307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:26.620 { 00:13:26.620 "results": [ 00:13:26.620 { 00:13:26.620 "job": "raid_bdev1", 00:13:26.620 "core_mask": "0x1", 00:13:26.620 "workload": "randrw", 00:13:26.620 "percentage": 50, 00:13:26.620 "status": "finished", 00:13:26.620 "queue_depth": 1, 00:13:26.620 "io_size": 131072, 00:13:26.620 "runtime": 1.316657, 00:13:26.620 "iops": 14121.369498662141, 00:13:26.620 "mibps": 1765.1711873327677, 00:13:26.620 "io_failed": 1, 00:13:26.620 "io_timeout": 0, 00:13:26.620 "avg_latency_us": 97.92566501350136, 00:13:26.620 "min_latency_us": 29.60963855421687, 00:13:26.620 "max_latency_us": 1598.9204819277109 00:13:26.620 } 00:13:26.620 ], 00:13:26.620 "core_count": 1 00:13:26.620 } 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71683 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71683 ']' 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71683 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71683 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.620 killing process with pid 71683 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71683' 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71683 00:13:26.620 [2024-09-27 22:30:22.286585] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.620 22:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71683 00:13:26.878 [2024-09-27 22:30:22.644476] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5KeqCY2Od1 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:13:29.412 00:13:29.412 real 0m6.140s 00:13:29.412 user 0m6.940s 00:13:29.412 sys 0m0.767s 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:29.412 22:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.412 ************************************ 00:13:29.412 END TEST raid_read_error_test 00:13:29.412 ************************************ 00:13:29.412 22:30:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:29.412 22:30:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:29.412 22:30:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.412 22:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.412 ************************************ 00:13:29.412 START TEST raid_write_error_test 00:13:29.412 ************************************ 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L9PJeKdRaG 00:13:29.412 22:30:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71840 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71840 00:13:29.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71840 ']' 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.412 22:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.412 [2024-09-27 22:30:25.016768] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:13:29.412 [2024-09-27 22:30:25.017715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71840 ] 00:13:29.412 [2024-09-27 22:30:25.192596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.671 [2024-09-27 22:30:25.446529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.929 [2024-09-27 22:30:25.705117] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.929 [2024-09-27 22:30:25.705162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.495 BaseBdev1_malloc 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.495 true 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.495 [2024-09-27 22:30:26.265374] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:30.495 [2024-09-27 22:30:26.265453] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.495 [2024-09-27 22:30:26.265479] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:30.495 [2024-09-27 22:30:26.265494] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.495 [2024-09-27 22:30:26.268208] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.495 [2024-09-27 22:30:26.268258] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.495 BaseBdev1 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.495 BaseBdev2_malloc 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:30.495 22:30:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.495 true 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.495 [2024-09-27 22:30:26.341917] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:30.495 [2024-09-27 22:30:26.342011] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.495 [2024-09-27 22:30:26.342054] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:30.495 [2024-09-27 22:30:26.342070] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.495 [2024-09-27 22:30:26.344832] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.495 [2024-09-27 22:30:26.344891] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.495 BaseBdev2 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.495 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:30.753 BaseBdev3_malloc 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.753 true 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.753 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.753 [2024-09-27 22:30:26.418175] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:30.753 [2024-09-27 22:30:26.418522] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.753 [2024-09-27 22:30:26.418558] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:30.753 [2024-09-27 22:30:26.418574] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.753 [2024-09-27 22:30:26.421305] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.753 [2024-09-27 22:30:26.421351] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:30.753 BaseBdev3 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 BaseBdev4_malloc 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 true 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 [2024-09-27 22:30:26.495699] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:30.754 [2024-09-27 22:30:26.495785] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.754 [2024-09-27 22:30:26.495814] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:30.754 [2024-09-27 22:30:26.495830] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.754 [2024-09-27 22:30:26.498559] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.754 [2024-09-27 22:30:26.498616] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:30.754 BaseBdev4 
00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 [2024-09-27 22:30:26.507762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.754 [2024-09-27 22:30:26.510206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.754 [2024-09-27 22:30:26.510293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.754 [2024-09-27 22:30:26.510363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.754 [2024-09-27 22:30:26.510630] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:30.754 [2024-09-27 22:30:26.510647] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:30.754 [2024-09-27 22:30:26.510968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:30.754 [2024-09-27 22:30:26.511170] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:30.754 [2024-09-27 22:30:26.511182] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:30.754 [2024-09-27 22:30:26.511414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.754 "name": "raid_bdev1", 00:13:30.754 "uuid": "4c9502e2-b877-4933-9c03-e2106ee4f59a", 00:13:30.754 "strip_size_kb": 64, 00:13:30.754 "state": "online", 00:13:30.754 "raid_level": "raid0", 00:13:30.754 "superblock": true, 00:13:30.754 "num_base_bdevs": 4, 00:13:30.754 "num_base_bdevs_discovered": 4, 00:13:30.754 
"num_base_bdevs_operational": 4, 00:13:30.754 "base_bdevs_list": [ 00:13:30.754 { 00:13:30.754 "name": "BaseBdev1", 00:13:30.754 "uuid": "97ad5337-0473-53cd-a801-647184aa5701", 00:13:30.754 "is_configured": true, 00:13:30.754 "data_offset": 2048, 00:13:30.754 "data_size": 63488 00:13:30.754 }, 00:13:30.754 { 00:13:30.754 "name": "BaseBdev2", 00:13:30.754 "uuid": "b843ed7d-3190-5d6f-9558-66717f93c4b4", 00:13:30.754 "is_configured": true, 00:13:30.754 "data_offset": 2048, 00:13:30.754 "data_size": 63488 00:13:30.754 }, 00:13:30.754 { 00:13:30.754 "name": "BaseBdev3", 00:13:30.754 "uuid": "063e890f-9a29-54f5-8b4d-18f08b444d82", 00:13:30.754 "is_configured": true, 00:13:30.754 "data_offset": 2048, 00:13:30.754 "data_size": 63488 00:13:30.754 }, 00:13:30.754 { 00:13:30.754 "name": "BaseBdev4", 00:13:30.754 "uuid": "868570b4-f6bf-5a55-8be3-eedaede8bbe0", 00:13:30.754 "is_configured": true, 00:13:30.754 "data_offset": 2048, 00:13:30.754 "data_size": 63488 00:13:30.754 } 00:13:30.754 ] 00:13:30.754 }' 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.754 22:30:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.320 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:31.320 22:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.320 [2024-09-27 22:30:27.065165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.257 22:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.257 22:30:28 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.257 22:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.257 "name": "raid_bdev1", 00:13:32.257 "uuid": "4c9502e2-b877-4933-9c03-e2106ee4f59a", 00:13:32.257 "strip_size_kb": 64, 00:13:32.257 "state": "online", 00:13:32.257 "raid_level": "raid0", 00:13:32.257 "superblock": true, 00:13:32.257 "num_base_bdevs": 4, 00:13:32.257 "num_base_bdevs_discovered": 4, 00:13:32.257 "num_base_bdevs_operational": 4, 00:13:32.257 "base_bdevs_list": [ 00:13:32.257 { 00:13:32.257 "name": "BaseBdev1", 00:13:32.257 "uuid": "97ad5337-0473-53cd-a801-647184aa5701", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev2", 00:13:32.257 "uuid": "b843ed7d-3190-5d6f-9558-66717f93c4b4", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev3", 00:13:32.257 "uuid": "063e890f-9a29-54f5-8b4d-18f08b444d82", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "name": "BaseBdev4", 00:13:32.257 "uuid": "868570b4-f6bf-5a55-8be3-eedaede8bbe0", 00:13:32.257 "is_configured": true, 00:13:32.257 "data_offset": 2048, 00:13:32.257 "data_size": 63488 00:13:32.257 } 00:13:32.257 ] 00:13:32.257 }' 00:13:32.257 22:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.257 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:32.825 [2024-09-27 22:30:28.430309] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:32.825 [2024-09-27 22:30:28.430349] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.825 [2024-09-27 22:30:28.433216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.825 [2024-09-27 22:30:28.433286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.825 [2024-09-27 22:30:28.433334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.825 [2024-09-27 22:30:28.433350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:32.825 { 00:13:32.825 "results": [ 00:13:32.825 { 00:13:32.825 "job": "raid_bdev1", 00:13:32.825 "core_mask": "0x1", 00:13:32.825 "workload": "randrw", 00:13:32.825 "percentage": 50, 00:13:32.825 "status": "finished", 00:13:32.825 "queue_depth": 1, 00:13:32.825 "io_size": 131072, 00:13:32.825 "runtime": 1.365038, 00:13:32.825 "iops": 14390.80816797774, 00:13:32.825 "mibps": 1798.8510209972176, 00:13:32.825 "io_failed": 1, 00:13:32.825 "io_timeout": 0, 00:13:32.825 "avg_latency_us": 96.01484338984854, 00:13:32.825 "min_latency_us": 30.020883534136548, 00:13:32.825 "max_latency_us": 1559.4409638554216 00:13:32.825 } 00:13:32.825 ], 00:13:32.825 "core_count": 1 00:13:32.825 } 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71840 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71840 ']' 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71840 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71840 00:13:32.825 killing process with pid 71840 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71840' 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71840 00:13:32.825 22:30:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71840 00:13:32.825 [2024-09-27 22:30:28.483924] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.084 [2024-09-27 22:30:28.843546] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L9PJeKdRaG 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:13:35.616 00:13:35.616 real 0m6.155s 00:13:35.616 user 0m7.008s 00:13:35.616 sys 0m0.756s 00:13:35.616 22:30:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.616 22:30:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 ************************************ 00:13:35.616 END TEST raid_write_error_test 00:13:35.616 ************************************ 00:13:35.616 22:30:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:35.616 22:30:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:35.616 22:30:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:35.616 22:30:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.616 22:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 ************************************ 00:13:35.616 START TEST raid_state_function_test 00:13:35.616 ************************************ 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:35.616 Process raid pid: 72000 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72000 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72000' 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72000 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72000 ']' 00:13:35.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.616 22:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.616 [2024-09-27 22:30:31.247097] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:13:35.616 [2024-09-27 22:30:31.247260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.616 [2024-09-27 22:30:31.414465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.875 [2024-09-27 22:30:31.670658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.133 [2024-09-27 22:30:31.934015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.133 [2024-09-27 22:30:31.934279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.699 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.699 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:36.699 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:36.699 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.699 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.699 [2024-09-27 22:30:32.449724] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:36.699 [2024-09-27 22:30:32.449798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:36.699 [2024-09-27 22:30:32.449810] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.699 [2024-09-27 22:30:32.449824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.699 [2024-09-27 22:30:32.449833] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:36.699 [2024-09-27 22:30:32.449848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:36.699 [2024-09-27 22:30:32.449857] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:36.699 [2024-09-27 22:30:32.449870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:36.699 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.700 "name": "Existed_Raid", 00:13:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.700 "strip_size_kb": 64, 00:13:36.700 "state": "configuring", 00:13:36.700 "raid_level": "concat", 00:13:36.700 "superblock": false, 00:13:36.700 "num_base_bdevs": 4, 00:13:36.700 "num_base_bdevs_discovered": 0, 00:13:36.700 "num_base_bdevs_operational": 4, 00:13:36.700 "base_bdevs_list": [ 00:13:36.700 { 00:13:36.700 "name": "BaseBdev1", 00:13:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.700 "is_configured": false, 00:13:36.700 "data_offset": 0, 00:13:36.700 "data_size": 0 00:13:36.700 }, 00:13:36.700 { 00:13:36.700 "name": "BaseBdev2", 00:13:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.700 "is_configured": false, 00:13:36.700 "data_offset": 0, 00:13:36.700 "data_size": 0 00:13:36.700 }, 00:13:36.700 { 00:13:36.700 "name": "BaseBdev3", 00:13:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.700 "is_configured": false, 00:13:36.700 "data_offset": 0, 00:13:36.700 "data_size": 0 00:13:36.700 }, 00:13:36.700 { 00:13:36.700 "name": "BaseBdev4", 00:13:36.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.700 "is_configured": false, 00:13:36.700 "data_offset": 0, 00:13:36.700 "data_size": 0 00:13:36.700 } 00:13:36.700 ] 00:13:36.700 }' 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.700 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.265 [2024-09-27 22:30:32.881069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.265 [2024-09-27 22:30:32.881120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.265 [2024-09-27 22:30:32.889081] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:37.265 [2024-09-27 22:30:32.889288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:37.265 [2024-09-27 22:30:32.889396] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.265 [2024-09-27 22:30:32.889501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.265 [2024-09-27 22:30:32.889579] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:37.265 [2024-09-27 22:30:32.889705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:37.265 [2024-09-27 22:30:32.889784] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:37.265 [2024-09-27 22:30:32.889828] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.265 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.265 [2024-09-27 22:30:32.943211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.265 BaseBdev1 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 [ 00:13:37.266 { 00:13:37.266 "name": "BaseBdev1", 00:13:37.266 "aliases": [ 00:13:37.266 "f41742d6-1454-4548-9476-aa2d9ea55cf5" 00:13:37.266 ], 00:13:37.266 "product_name": "Malloc disk", 00:13:37.266 "block_size": 512, 00:13:37.266 "num_blocks": 65536, 00:13:37.266 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:37.266 "assigned_rate_limits": { 00:13:37.266 "rw_ios_per_sec": 0, 00:13:37.266 "rw_mbytes_per_sec": 0, 00:13:37.266 "r_mbytes_per_sec": 0, 00:13:37.266 "w_mbytes_per_sec": 0 00:13:37.266 }, 00:13:37.266 "claimed": true, 00:13:37.266 "claim_type": "exclusive_write", 00:13:37.266 "zoned": false, 00:13:37.266 "supported_io_types": { 00:13:37.266 "read": true, 00:13:37.266 "write": true, 00:13:37.266 "unmap": true, 00:13:37.266 "flush": true, 00:13:37.266 "reset": true, 00:13:37.266 "nvme_admin": false, 00:13:37.266 "nvme_io": false, 00:13:37.266 "nvme_io_md": false, 00:13:37.266 "write_zeroes": true, 00:13:37.266 "zcopy": true, 00:13:37.266 "get_zone_info": false, 00:13:37.266 "zone_management": false, 00:13:37.266 "zone_append": false, 00:13:37.266 "compare": false, 00:13:37.266 "compare_and_write": false, 00:13:37.266 "abort": true, 00:13:37.266 "seek_hole": false, 00:13:37.266 "seek_data": false, 00:13:37.266 "copy": true, 00:13:37.266 "nvme_iov_md": false 00:13:37.266 }, 00:13:37.266 "memory_domains": [ 00:13:37.266 { 00:13:37.266 "dma_device_id": "system", 00:13:37.266 "dma_device_type": 1 00:13:37.266 }, 00:13:37.266 { 00:13:37.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.266 "dma_device_type": 2 00:13:37.266 } 00:13:37.266 ], 00:13:37.266 "driver_specific": {} 00:13:37.266 } 00:13:37.266 ] 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.266 22:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.266 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.266 "name": "Existed_Raid", 
00:13:37.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.266 "strip_size_kb": 64, 00:13:37.266 "state": "configuring", 00:13:37.266 "raid_level": "concat", 00:13:37.266 "superblock": false, 00:13:37.266 "num_base_bdevs": 4, 00:13:37.266 "num_base_bdevs_discovered": 1, 00:13:37.266 "num_base_bdevs_operational": 4, 00:13:37.266 "base_bdevs_list": [ 00:13:37.266 { 00:13:37.266 "name": "BaseBdev1", 00:13:37.266 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:37.266 "is_configured": true, 00:13:37.266 "data_offset": 0, 00:13:37.266 "data_size": 65536 00:13:37.266 }, 00:13:37.266 { 00:13:37.266 "name": "BaseBdev2", 00:13:37.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.266 "is_configured": false, 00:13:37.266 "data_offset": 0, 00:13:37.266 "data_size": 0 00:13:37.266 }, 00:13:37.266 { 00:13:37.266 "name": "BaseBdev3", 00:13:37.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.266 "is_configured": false, 00:13:37.266 "data_offset": 0, 00:13:37.266 "data_size": 0 00:13:37.266 }, 00:13:37.266 { 00:13:37.266 "name": "BaseBdev4", 00:13:37.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.266 "is_configured": false, 00:13:37.266 "data_offset": 0, 00:13:37.266 "data_size": 0 00:13:37.266 } 00:13:37.266 ] 00:13:37.266 }' 00:13:37.266 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.266 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:37.523 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.524 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.524 [2024-09-27 22:30:33.390818] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.524 [2024-09-27 22:30:33.390880] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:37.524 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.524 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:37.524 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.524 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.524 [2024-09-27 22:30:33.398882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.820 [2024-09-27 22:30:33.401402] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.820 [2024-09-27 22:30:33.401595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.820 [2024-09-27 22:30:33.401688] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:37.820 [2024-09-27 22:30:33.401716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:37.820 [2024-09-27 22:30:33.401726] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:37.820 [2024-09-27 22:30:33.401738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.820 "name": "Existed_Raid", 00:13:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.820 "strip_size_kb": 64, 00:13:37.820 "state": "configuring", 00:13:37.820 "raid_level": "concat", 00:13:37.820 "superblock": false, 00:13:37.820 "num_base_bdevs": 4, 00:13:37.820 
"num_base_bdevs_discovered": 1, 00:13:37.820 "num_base_bdevs_operational": 4, 00:13:37.820 "base_bdevs_list": [ 00:13:37.820 { 00:13:37.820 "name": "BaseBdev1", 00:13:37.820 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:37.820 "is_configured": true, 00:13:37.820 "data_offset": 0, 00:13:37.820 "data_size": 65536 00:13:37.820 }, 00:13:37.820 { 00:13:37.820 "name": "BaseBdev2", 00:13:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.820 "is_configured": false, 00:13:37.820 "data_offset": 0, 00:13:37.820 "data_size": 0 00:13:37.820 }, 00:13:37.820 { 00:13:37.820 "name": "BaseBdev3", 00:13:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.820 "is_configured": false, 00:13:37.820 "data_offset": 0, 00:13:37.820 "data_size": 0 00:13:37.820 }, 00:13:37.820 { 00:13:37.820 "name": "BaseBdev4", 00:13:37.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.820 "is_configured": false, 00:13:37.820 "data_offset": 0, 00:13:37.820 "data_size": 0 00:13:37.820 } 00:13:37.820 ] 00:13:37.820 }' 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.820 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.078 [2024-09-27 22:30:33.913157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.078 BaseBdev2 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:38.078 22:30:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.078 [ 00:13:38.078 { 00:13:38.078 "name": "BaseBdev2", 00:13:38.078 "aliases": [ 00:13:38.078 "551f9eb1-eda0-4262-af43-8b138101fcd8" 00:13:38.078 ], 00:13:38.078 "product_name": "Malloc disk", 00:13:38.078 "block_size": 512, 00:13:38.078 "num_blocks": 65536, 00:13:38.078 "uuid": "551f9eb1-eda0-4262-af43-8b138101fcd8", 00:13:38.078 "assigned_rate_limits": { 00:13:38.078 "rw_ios_per_sec": 0, 00:13:38.078 "rw_mbytes_per_sec": 0, 00:13:38.078 "r_mbytes_per_sec": 0, 00:13:38.078 "w_mbytes_per_sec": 0 00:13:38.078 }, 00:13:38.078 "claimed": true, 00:13:38.078 "claim_type": "exclusive_write", 00:13:38.078 "zoned": false, 00:13:38.078 "supported_io_types": { 
00:13:38.078 "read": true, 00:13:38.078 "write": true, 00:13:38.078 "unmap": true, 00:13:38.078 "flush": true, 00:13:38.078 "reset": true, 00:13:38.078 "nvme_admin": false, 00:13:38.078 "nvme_io": false, 00:13:38.078 "nvme_io_md": false, 00:13:38.078 "write_zeroes": true, 00:13:38.078 "zcopy": true, 00:13:38.078 "get_zone_info": false, 00:13:38.078 "zone_management": false, 00:13:38.078 "zone_append": false, 00:13:38.078 "compare": false, 00:13:38.078 "compare_and_write": false, 00:13:38.078 "abort": true, 00:13:38.078 "seek_hole": false, 00:13:38.078 "seek_data": false, 00:13:38.078 "copy": true, 00:13:38.078 "nvme_iov_md": false 00:13:38.078 }, 00:13:38.078 "memory_domains": [ 00:13:38.078 { 00:13:38.078 "dma_device_id": "system", 00:13:38.078 "dma_device_type": 1 00:13:38.078 }, 00:13:38.078 { 00:13:38.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.078 "dma_device_type": 2 00:13:38.078 } 00:13:38.078 ], 00:13:38.078 "driver_specific": {} 00:13:38.078 } 00:13:38.078 ] 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.078 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.337 "name": "Existed_Raid", 00:13:38.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.337 "strip_size_kb": 64, 00:13:38.337 "state": "configuring", 00:13:38.337 "raid_level": "concat", 00:13:38.337 "superblock": false, 00:13:38.337 "num_base_bdevs": 4, 00:13:38.337 "num_base_bdevs_discovered": 2, 00:13:38.337 "num_base_bdevs_operational": 4, 00:13:38.337 "base_bdevs_list": [ 00:13:38.337 { 00:13:38.337 "name": "BaseBdev1", 00:13:38.337 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:38.337 "is_configured": true, 00:13:38.337 "data_offset": 0, 00:13:38.337 "data_size": 65536 00:13:38.337 }, 00:13:38.337 { 00:13:38.337 "name": "BaseBdev2", 00:13:38.337 "uuid": "551f9eb1-eda0-4262-af43-8b138101fcd8", 00:13:38.337 
"is_configured": true, 00:13:38.337 "data_offset": 0, 00:13:38.337 "data_size": 65536 00:13:38.337 }, 00:13:38.337 { 00:13:38.337 "name": "BaseBdev3", 00:13:38.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.337 "is_configured": false, 00:13:38.337 "data_offset": 0, 00:13:38.337 "data_size": 0 00:13:38.337 }, 00:13:38.337 { 00:13:38.337 "name": "BaseBdev4", 00:13:38.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.337 "is_configured": false, 00:13:38.337 "data_offset": 0, 00:13:38.337 "data_size": 0 00:13:38.337 } 00:13:38.337 ] 00:13:38.337 }' 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.337 22:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.595 [2024-09-27 22:30:34.437384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.595 BaseBdev3 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.595 [ 00:13:38.595 { 00:13:38.595 "name": "BaseBdev3", 00:13:38.595 "aliases": [ 00:13:38.595 "ad48f1d3-5b8d-4e5b-9be0-9e42b417c4f0" 00:13:38.595 ], 00:13:38.595 "product_name": "Malloc disk", 00:13:38.595 "block_size": 512, 00:13:38.595 "num_blocks": 65536, 00:13:38.595 "uuid": "ad48f1d3-5b8d-4e5b-9be0-9e42b417c4f0", 00:13:38.595 "assigned_rate_limits": { 00:13:38.595 "rw_ios_per_sec": 0, 00:13:38.595 "rw_mbytes_per_sec": 0, 00:13:38.595 "r_mbytes_per_sec": 0, 00:13:38.595 "w_mbytes_per_sec": 0 00:13:38.595 }, 00:13:38.595 "claimed": true, 00:13:38.595 "claim_type": "exclusive_write", 00:13:38.595 "zoned": false, 00:13:38.595 "supported_io_types": { 00:13:38.595 "read": true, 00:13:38.595 "write": true, 00:13:38.595 "unmap": true, 00:13:38.595 "flush": true, 00:13:38.595 "reset": true, 00:13:38.595 "nvme_admin": false, 00:13:38.595 "nvme_io": false, 00:13:38.595 "nvme_io_md": false, 00:13:38.595 "write_zeroes": true, 00:13:38.595 "zcopy": true, 00:13:38.595 "get_zone_info": false, 00:13:38.595 "zone_management": false, 00:13:38.595 "zone_append": false, 00:13:38.595 "compare": false, 00:13:38.595 "compare_and_write": false, 
00:13:38.595 "abort": true, 00:13:38.595 "seek_hole": false, 00:13:38.595 "seek_data": false, 00:13:38.595 "copy": true, 00:13:38.595 "nvme_iov_md": false 00:13:38.595 }, 00:13:38.595 "memory_domains": [ 00:13:38.595 { 00:13:38.595 "dma_device_id": "system", 00:13:38.595 "dma_device_type": 1 00:13:38.595 }, 00:13:38.595 { 00:13:38.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.595 "dma_device_type": 2 00:13:38.595 } 00:13:38.595 ], 00:13:38.595 "driver_specific": {} 00:13:38.595 } 00:13:38.595 ] 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.595 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.854 "name": "Existed_Raid", 00:13:38.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.854 "strip_size_kb": 64, 00:13:38.854 "state": "configuring", 00:13:38.854 "raid_level": "concat", 00:13:38.854 "superblock": false, 00:13:38.854 "num_base_bdevs": 4, 00:13:38.854 "num_base_bdevs_discovered": 3, 00:13:38.854 "num_base_bdevs_operational": 4, 00:13:38.854 "base_bdevs_list": [ 00:13:38.854 { 00:13:38.854 "name": "BaseBdev1", 00:13:38.854 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:38.854 "is_configured": true, 00:13:38.854 "data_offset": 0, 00:13:38.854 "data_size": 65536 00:13:38.854 }, 00:13:38.854 { 00:13:38.854 "name": "BaseBdev2", 00:13:38.854 "uuid": "551f9eb1-eda0-4262-af43-8b138101fcd8", 00:13:38.854 "is_configured": true, 00:13:38.854 "data_offset": 0, 00:13:38.854 "data_size": 65536 00:13:38.854 }, 00:13:38.854 { 00:13:38.854 "name": "BaseBdev3", 00:13:38.854 "uuid": "ad48f1d3-5b8d-4e5b-9be0-9e42b417c4f0", 00:13:38.854 "is_configured": true, 00:13:38.854 "data_offset": 0, 00:13:38.854 "data_size": 65536 00:13:38.854 }, 00:13:38.854 { 00:13:38.854 "name": "BaseBdev4", 00:13:38.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.854 "is_configured": false, 
00:13:38.854 "data_offset": 0, 00:13:38.854 "data_size": 0 00:13:38.854 } 00:13:38.854 ] 00:13:38.854 }' 00:13:38.854 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.855 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.113 [2024-09-27 22:30:34.931348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:39.113 [2024-09-27 22:30:34.931424] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:39.113 [2024-09-27 22:30:34.931434] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:39.113 [2024-09-27 22:30:34.931743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:39.113 [2024-09-27 22:30:34.931921] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:39.113 [2024-09-27 22:30:34.931937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:39.113 [2024-09-27 22:30:34.932520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.113 BaseBdev4 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.113 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.113 [ 00:13:39.113 { 00:13:39.113 "name": "BaseBdev4", 00:13:39.113 "aliases": [ 00:13:39.113 "81bb0d9b-f168-4ec9-aa50-d93a376abf76" 00:13:39.113 ], 00:13:39.113 "product_name": "Malloc disk", 00:13:39.113 "block_size": 512, 00:13:39.113 "num_blocks": 65536, 00:13:39.113 "uuid": "81bb0d9b-f168-4ec9-aa50-d93a376abf76", 00:13:39.113 "assigned_rate_limits": { 00:13:39.113 "rw_ios_per_sec": 0, 00:13:39.113 "rw_mbytes_per_sec": 0, 00:13:39.113 "r_mbytes_per_sec": 0, 00:13:39.113 "w_mbytes_per_sec": 0 00:13:39.113 }, 00:13:39.113 "claimed": true, 00:13:39.113 "claim_type": "exclusive_write", 00:13:39.113 "zoned": false, 00:13:39.113 "supported_io_types": { 00:13:39.113 "read": true, 00:13:39.113 "write": true, 00:13:39.113 "unmap": true, 00:13:39.113 "flush": true, 00:13:39.113 "reset": true, 00:13:39.113 
"nvme_admin": false, 00:13:39.113 "nvme_io": false, 00:13:39.113 "nvme_io_md": false, 00:13:39.113 "write_zeroes": true, 00:13:39.113 "zcopy": true, 00:13:39.113 "get_zone_info": false, 00:13:39.113 "zone_management": false, 00:13:39.113 "zone_append": false, 00:13:39.113 "compare": false, 00:13:39.113 "compare_and_write": false, 00:13:39.113 "abort": true, 00:13:39.113 "seek_hole": false, 00:13:39.113 "seek_data": false, 00:13:39.113 "copy": true, 00:13:39.114 "nvme_iov_md": false 00:13:39.114 }, 00:13:39.114 "memory_domains": [ 00:13:39.114 { 00:13:39.114 "dma_device_id": "system", 00:13:39.114 "dma_device_type": 1 00:13:39.114 }, 00:13:39.114 { 00:13:39.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.114 "dma_device_type": 2 00:13:39.114 } 00:13:39.114 ], 00:13:39.114 "driver_specific": {} 00:13:39.114 } 00:13:39.114 ] 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.114 
22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.114 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.372 22:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.372 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.372 "name": "Existed_Raid", 00:13:39.372 "uuid": "f3492c73-f00b-411a-8858-e878119ecd89", 00:13:39.372 "strip_size_kb": 64, 00:13:39.372 "state": "online", 00:13:39.372 "raid_level": "concat", 00:13:39.372 "superblock": false, 00:13:39.372 "num_base_bdevs": 4, 00:13:39.372 "num_base_bdevs_discovered": 4, 00:13:39.372 "num_base_bdevs_operational": 4, 00:13:39.372 "base_bdevs_list": [ 00:13:39.372 { 00:13:39.372 "name": "BaseBdev1", 00:13:39.372 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:39.372 "is_configured": true, 00:13:39.372 "data_offset": 0, 00:13:39.372 "data_size": 65536 00:13:39.372 }, 00:13:39.372 { 00:13:39.372 "name": "BaseBdev2", 00:13:39.372 "uuid": "551f9eb1-eda0-4262-af43-8b138101fcd8", 00:13:39.372 "is_configured": true, 00:13:39.372 "data_offset": 0, 00:13:39.372 "data_size": 65536 00:13:39.372 }, 00:13:39.372 { 00:13:39.372 "name": "BaseBdev3", 
00:13:39.372 "uuid": "ad48f1d3-5b8d-4e5b-9be0-9e42b417c4f0", 00:13:39.372 "is_configured": true, 00:13:39.372 "data_offset": 0, 00:13:39.372 "data_size": 65536 00:13:39.372 }, 00:13:39.372 { 00:13:39.372 "name": "BaseBdev4", 00:13:39.372 "uuid": "81bb0d9b-f168-4ec9-aa50-d93a376abf76", 00:13:39.372 "is_configured": true, 00:13:39.372 "data_offset": 0, 00:13:39.372 "data_size": 65536 00:13:39.372 } 00:13:39.372 ] 00:13:39.372 }' 00:13:39.372 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.372 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.630 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.630 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.631 [2024-09-27 22:30:35.391224] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.631 
22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.631 "name": "Existed_Raid", 00:13:39.631 "aliases": [ 00:13:39.631 "f3492c73-f00b-411a-8858-e878119ecd89" 00:13:39.631 ], 00:13:39.631 "product_name": "Raid Volume", 00:13:39.631 "block_size": 512, 00:13:39.631 "num_blocks": 262144, 00:13:39.631 "uuid": "f3492c73-f00b-411a-8858-e878119ecd89", 00:13:39.631 "assigned_rate_limits": { 00:13:39.631 "rw_ios_per_sec": 0, 00:13:39.631 "rw_mbytes_per_sec": 0, 00:13:39.631 "r_mbytes_per_sec": 0, 00:13:39.631 "w_mbytes_per_sec": 0 00:13:39.631 }, 00:13:39.631 "claimed": false, 00:13:39.631 "zoned": false, 00:13:39.631 "supported_io_types": { 00:13:39.631 "read": true, 00:13:39.631 "write": true, 00:13:39.631 "unmap": true, 00:13:39.631 "flush": true, 00:13:39.631 "reset": true, 00:13:39.631 "nvme_admin": false, 00:13:39.631 "nvme_io": false, 00:13:39.631 "nvme_io_md": false, 00:13:39.631 "write_zeroes": true, 00:13:39.631 "zcopy": false, 00:13:39.631 "get_zone_info": false, 00:13:39.631 "zone_management": false, 00:13:39.631 "zone_append": false, 00:13:39.631 "compare": false, 00:13:39.631 "compare_and_write": false, 00:13:39.631 "abort": false, 00:13:39.631 "seek_hole": false, 00:13:39.631 "seek_data": false, 00:13:39.631 "copy": false, 00:13:39.631 "nvme_iov_md": false 00:13:39.631 }, 00:13:39.631 "memory_domains": [ 00:13:39.631 { 00:13:39.631 "dma_device_id": "system", 00:13:39.631 "dma_device_type": 1 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.631 "dma_device_type": 2 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": "system", 00:13:39.631 "dma_device_type": 1 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.631 "dma_device_type": 2 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": "system", 00:13:39.631 "dma_device_type": 1 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:39.631 "dma_device_type": 2 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": "system", 00:13:39.631 "dma_device_type": 1 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.631 "dma_device_type": 2 00:13:39.631 } 00:13:39.631 ], 00:13:39.631 "driver_specific": { 00:13:39.631 "raid": { 00:13:39.631 "uuid": "f3492c73-f00b-411a-8858-e878119ecd89", 00:13:39.631 "strip_size_kb": 64, 00:13:39.631 "state": "online", 00:13:39.631 "raid_level": "concat", 00:13:39.631 "superblock": false, 00:13:39.631 "num_base_bdevs": 4, 00:13:39.631 "num_base_bdevs_discovered": 4, 00:13:39.631 "num_base_bdevs_operational": 4, 00:13:39.631 "base_bdevs_list": [ 00:13:39.631 { 00:13:39.631 "name": "BaseBdev1", 00:13:39.631 "uuid": "f41742d6-1454-4548-9476-aa2d9ea55cf5", 00:13:39.631 "is_configured": true, 00:13:39.631 "data_offset": 0, 00:13:39.631 "data_size": 65536 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "name": "BaseBdev2", 00:13:39.631 "uuid": "551f9eb1-eda0-4262-af43-8b138101fcd8", 00:13:39.631 "is_configured": true, 00:13:39.631 "data_offset": 0, 00:13:39.631 "data_size": 65536 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "name": "BaseBdev3", 00:13:39.631 "uuid": "ad48f1d3-5b8d-4e5b-9be0-9e42b417c4f0", 00:13:39.631 "is_configured": true, 00:13:39.631 "data_offset": 0, 00:13:39.631 "data_size": 65536 00:13:39.631 }, 00:13:39.631 { 00:13:39.631 "name": "BaseBdev4", 00:13:39.631 "uuid": "81bb0d9b-f168-4ec9-aa50-d93a376abf76", 00:13:39.631 "is_configured": true, 00:13:39.631 "data_offset": 0, 00:13:39.631 "data_size": 65536 00:13:39.631 } 00:13:39.631 ] 00:13:39.631 } 00:13:39.631 } 00:13:39.631 }' 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:39.631 BaseBdev2 
00:13:39.631 BaseBdev3 00:13:39.631 BaseBdev4' 00:13:39.631 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.889 22:30:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.889 22:30:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.889 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.890 [2024-09-27 22:30:35.710470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.890 [2024-09-27 22:30:35.710508] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.890 [2024-09-27 22:30:35.710566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.149 "name": "Existed_Raid", 00:13:40.149 "uuid": "f3492c73-f00b-411a-8858-e878119ecd89", 00:13:40.149 "strip_size_kb": 64, 00:13:40.149 "state": "offline", 00:13:40.149 "raid_level": "concat", 00:13:40.149 "superblock": false, 00:13:40.149 "num_base_bdevs": 4, 00:13:40.149 "num_base_bdevs_discovered": 3, 00:13:40.149 "num_base_bdevs_operational": 3, 00:13:40.149 "base_bdevs_list": [ 00:13:40.149 { 00:13:40.149 "name": null, 00:13:40.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.149 "is_configured": false, 00:13:40.149 "data_offset": 0, 00:13:40.149 "data_size": 65536 00:13:40.149 }, 00:13:40.149 { 00:13:40.149 "name": "BaseBdev2", 00:13:40.149 "uuid": "551f9eb1-eda0-4262-af43-8b138101fcd8", 00:13:40.149 "is_configured": 
true, 00:13:40.149 "data_offset": 0, 00:13:40.149 "data_size": 65536 00:13:40.149 }, 00:13:40.149 { 00:13:40.149 "name": "BaseBdev3", 00:13:40.149 "uuid": "ad48f1d3-5b8d-4e5b-9be0-9e42b417c4f0", 00:13:40.149 "is_configured": true, 00:13:40.149 "data_offset": 0, 00:13:40.149 "data_size": 65536 00:13:40.149 }, 00:13:40.149 { 00:13:40.149 "name": "BaseBdev4", 00:13:40.149 "uuid": "81bb0d9b-f168-4ec9-aa50-d93a376abf76", 00:13:40.149 "is_configured": true, 00:13:40.149 "data_offset": 0, 00:13:40.149 "data_size": 65536 00:13:40.149 } 00:13:40.149 ] 00:13:40.149 }' 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.149 22:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.408 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:40.408 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.408 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.408 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.408 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:40.408 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.666 [2024-09-27 22:30:36.312332] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.666 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.666 [2024-09-27 22:30:36.474798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:40.925 22:30:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.925 [2024-09-27 22:30:36.635841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:40.925 [2024-09-27 22:30:36.635896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.925 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.184 BaseBdev2 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.184 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.184 [ 00:13:41.184 { 00:13:41.184 "name": "BaseBdev2", 00:13:41.184 "aliases": [ 00:13:41.184 "7541e91e-d172-4f2e-93d0-517dcddc7078" 00:13:41.184 ], 00:13:41.184 "product_name": "Malloc disk", 00:13:41.184 "block_size": 512, 00:13:41.184 "num_blocks": 65536, 00:13:41.184 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:41.184 "assigned_rate_limits": { 00:13:41.184 "rw_ios_per_sec": 0, 00:13:41.184 "rw_mbytes_per_sec": 0, 00:13:41.184 "r_mbytes_per_sec": 0, 00:13:41.184 "w_mbytes_per_sec": 0 00:13:41.184 }, 00:13:41.184 "claimed": false, 00:13:41.184 "zoned": false, 00:13:41.184 "supported_io_types": { 00:13:41.184 "read": true, 00:13:41.184 "write": true, 00:13:41.184 "unmap": true, 00:13:41.184 "flush": true, 00:13:41.184 "reset": true, 00:13:41.184 "nvme_admin": false, 00:13:41.184 "nvme_io": false, 00:13:41.184 "nvme_io_md": false, 00:13:41.184 "write_zeroes": true, 00:13:41.184 "zcopy": true, 00:13:41.184 "get_zone_info": false, 00:13:41.184 "zone_management": false, 00:13:41.184 "zone_append": false, 00:13:41.184 "compare": false, 00:13:41.184 "compare_and_write": false, 00:13:41.184 "abort": true, 00:13:41.184 "seek_hole": false, 00:13:41.184 
"seek_data": false, 00:13:41.184 "copy": true, 00:13:41.184 "nvme_iov_md": false 00:13:41.184 }, 00:13:41.184 "memory_domains": [ 00:13:41.184 { 00:13:41.184 "dma_device_id": "system", 00:13:41.184 "dma_device_type": 1 00:13:41.184 }, 00:13:41.185 { 00:13:41.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.185 "dma_device_type": 2 00:13:41.185 } 00:13:41.185 ], 00:13:41.185 "driver_specific": {} 00:13:41.185 } 00:13:41.185 ] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 BaseBdev3 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 [ 00:13:41.185 { 00:13:41.185 "name": "BaseBdev3", 00:13:41.185 "aliases": [ 00:13:41.185 "ea095901-e883-4bdf-be14-968609801c1e" 00:13:41.185 ], 00:13:41.185 "product_name": "Malloc disk", 00:13:41.185 "block_size": 512, 00:13:41.185 "num_blocks": 65536, 00:13:41.185 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:41.185 "assigned_rate_limits": { 00:13:41.185 "rw_ios_per_sec": 0, 00:13:41.185 "rw_mbytes_per_sec": 0, 00:13:41.185 "r_mbytes_per_sec": 0, 00:13:41.185 "w_mbytes_per_sec": 0 00:13:41.185 }, 00:13:41.185 "claimed": false, 00:13:41.185 "zoned": false, 00:13:41.185 "supported_io_types": { 00:13:41.185 "read": true, 00:13:41.185 "write": true, 00:13:41.185 "unmap": true, 00:13:41.185 "flush": true, 00:13:41.185 "reset": true, 00:13:41.185 "nvme_admin": false, 00:13:41.185 "nvme_io": false, 00:13:41.185 "nvme_io_md": false, 00:13:41.185 "write_zeroes": true, 00:13:41.185 "zcopy": true, 00:13:41.185 "get_zone_info": false, 00:13:41.185 "zone_management": false, 00:13:41.185 "zone_append": false, 00:13:41.185 "compare": false, 00:13:41.185 "compare_and_write": false, 00:13:41.185 "abort": true, 00:13:41.185 "seek_hole": false, 00:13:41.185 "seek_data": false, 
00:13:41.185 "copy": true, 00:13:41.185 "nvme_iov_md": false 00:13:41.185 }, 00:13:41.185 "memory_domains": [ 00:13:41.185 { 00:13:41.185 "dma_device_id": "system", 00:13:41.185 "dma_device_type": 1 00:13:41.185 }, 00:13:41.185 { 00:13:41.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.185 "dma_device_type": 2 00:13:41.185 } 00:13:41.185 ], 00:13:41.185 "driver_specific": {} 00:13:41.185 } 00:13:41.185 ] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 BaseBdev4 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:41.185 
22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 [ 00:13:41.185 { 00:13:41.185 "name": "BaseBdev4", 00:13:41.185 "aliases": [ 00:13:41.185 "571bebfb-5c13-4648-bc46-63cc6f23a0ff" 00:13:41.185 ], 00:13:41.185 "product_name": "Malloc disk", 00:13:41.185 "block_size": 512, 00:13:41.185 "num_blocks": 65536, 00:13:41.185 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:41.185 "assigned_rate_limits": { 00:13:41.185 "rw_ios_per_sec": 0, 00:13:41.185 "rw_mbytes_per_sec": 0, 00:13:41.185 "r_mbytes_per_sec": 0, 00:13:41.185 "w_mbytes_per_sec": 0 00:13:41.185 }, 00:13:41.185 "claimed": false, 00:13:41.185 "zoned": false, 00:13:41.185 "supported_io_types": { 00:13:41.185 "read": true, 00:13:41.185 "write": true, 00:13:41.185 "unmap": true, 00:13:41.185 "flush": true, 00:13:41.185 "reset": true, 00:13:41.185 "nvme_admin": false, 00:13:41.185 "nvme_io": false, 00:13:41.185 "nvme_io_md": false, 00:13:41.185 "write_zeroes": true, 00:13:41.185 "zcopy": true, 00:13:41.185 "get_zone_info": false, 00:13:41.185 "zone_management": false, 00:13:41.185 "zone_append": false, 00:13:41.185 "compare": false, 00:13:41.185 "compare_and_write": false, 00:13:41.185 "abort": true, 00:13:41.185 "seek_hole": false, 00:13:41.185 "seek_data": false, 00:13:41.185 
"copy": true, 00:13:41.185 "nvme_iov_md": false 00:13:41.185 }, 00:13:41.185 "memory_domains": [ 00:13:41.185 { 00:13:41.185 "dma_device_id": "system", 00:13:41.185 "dma_device_type": 1 00:13:41.185 }, 00:13:41.185 { 00:13:41.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.185 "dma_device_type": 2 00:13:41.185 } 00:13:41.185 ], 00:13:41.185 "driver_specific": {} 00:13:41.185 } 00:13:41.185 ] 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.185 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.185 [2024-09-27 22:30:37.057647] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.185 [2024-09-27 22:30:37.057708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.185 [2024-09-27 22:30:37.057738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.185 [2024-09-27 22:30:37.060138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.185 [2024-09-27 22:30:37.060351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.443 22:30:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.443 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.444 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.444 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.444 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.444 "name": "Existed_Raid", 00:13:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.444 "strip_size_kb": 64, 00:13:41.444 "state": "configuring", 00:13:41.444 
"raid_level": "concat", 00:13:41.444 "superblock": false, 00:13:41.444 "num_base_bdevs": 4, 00:13:41.444 "num_base_bdevs_discovered": 3, 00:13:41.444 "num_base_bdevs_operational": 4, 00:13:41.444 "base_bdevs_list": [ 00:13:41.444 { 00:13:41.444 "name": "BaseBdev1", 00:13:41.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.444 "is_configured": false, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 0 00:13:41.444 }, 00:13:41.444 { 00:13:41.444 "name": "BaseBdev2", 00:13:41.444 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:41.444 "is_configured": true, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 65536 00:13:41.444 }, 00:13:41.444 { 00:13:41.444 "name": "BaseBdev3", 00:13:41.444 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:41.444 "is_configured": true, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 65536 00:13:41.444 }, 00:13:41.444 { 00:13:41.444 "name": "BaseBdev4", 00:13:41.444 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:41.444 "is_configured": true, 00:13:41.444 "data_offset": 0, 00:13:41.444 "data_size": 65536 00:13:41.444 } 00:13:41.444 ] 00:13:41.444 }' 00:13:41.444 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.444 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 [2024-09-27 22:30:37.497093] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.702 "name": "Existed_Raid", 00:13:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.702 "strip_size_kb": 64, 00:13:41.702 "state": "configuring", 00:13:41.702 "raid_level": "concat", 00:13:41.702 "superblock": false, 
00:13:41.702 "num_base_bdevs": 4, 00:13:41.702 "num_base_bdevs_discovered": 2, 00:13:41.702 "num_base_bdevs_operational": 4, 00:13:41.702 "base_bdevs_list": [ 00:13:41.702 { 00:13:41.702 "name": "BaseBdev1", 00:13:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.702 "is_configured": false, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 0 00:13:41.702 }, 00:13:41.702 { 00:13:41.702 "name": null, 00:13:41.702 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:41.702 "is_configured": false, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 65536 00:13:41.702 }, 00:13:41.702 { 00:13:41.702 "name": "BaseBdev3", 00:13:41.702 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:41.702 "is_configured": true, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 65536 00:13:41.702 }, 00:13:41.702 { 00:13:41.702 "name": "BaseBdev4", 00:13:41.702 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:41.702 "is_configured": true, 00:13:41.702 "data_offset": 0, 00:13:41.702 "data_size": 65536 00:13:41.702 } 00:13:41.702 ] 00:13:41.702 }' 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.702 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.269 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:42.269 22:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.269 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.269 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.269 22:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:42.269 22:30:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.269 [2024-09-27 22:30:38.050948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.269 BaseBdev1 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.269 [ 00:13:42.269 { 00:13:42.269 "name": "BaseBdev1", 00:13:42.269 "aliases": [ 00:13:42.269 "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5" 00:13:42.269 ], 00:13:42.269 "product_name": "Malloc disk", 00:13:42.269 "block_size": 512, 00:13:42.269 "num_blocks": 65536, 00:13:42.269 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:42.269 "assigned_rate_limits": { 00:13:42.269 "rw_ios_per_sec": 0, 00:13:42.269 "rw_mbytes_per_sec": 0, 00:13:42.269 "r_mbytes_per_sec": 0, 00:13:42.269 "w_mbytes_per_sec": 0 00:13:42.269 }, 00:13:42.269 "claimed": true, 00:13:42.269 "claim_type": "exclusive_write", 00:13:42.269 "zoned": false, 00:13:42.269 "supported_io_types": { 00:13:42.269 "read": true, 00:13:42.269 "write": true, 00:13:42.269 "unmap": true, 00:13:42.269 "flush": true, 00:13:42.269 "reset": true, 00:13:42.269 "nvme_admin": false, 00:13:42.269 "nvme_io": false, 00:13:42.269 "nvme_io_md": false, 00:13:42.269 "write_zeroes": true, 00:13:42.269 "zcopy": true, 00:13:42.269 "get_zone_info": false, 00:13:42.269 "zone_management": false, 00:13:42.269 "zone_append": false, 00:13:42.269 "compare": false, 00:13:42.269 "compare_and_write": false, 00:13:42.269 "abort": true, 00:13:42.269 "seek_hole": false, 00:13:42.269 "seek_data": false, 00:13:42.269 "copy": true, 00:13:42.269 "nvme_iov_md": false 00:13:42.269 }, 00:13:42.269 "memory_domains": [ 00:13:42.269 { 00:13:42.269 "dma_device_id": "system", 00:13:42.269 "dma_device_type": 1 00:13:42.269 }, 00:13:42.269 { 00:13:42.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.269 "dma_device_type": 2 00:13:42.269 } 00:13:42.269 ], 00:13:42.269 "driver_specific": {} 00:13:42.269 } 00:13:42.269 ] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.269 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.269 "name": "Existed_Raid", 00:13:42.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.269 "strip_size_kb": 64, 00:13:42.269 "state": "configuring", 00:13:42.270 "raid_level": "concat", 00:13:42.270 "superblock": false, 
00:13:42.270 "num_base_bdevs": 4, 00:13:42.270 "num_base_bdevs_discovered": 3, 00:13:42.270 "num_base_bdevs_operational": 4, 00:13:42.270 "base_bdevs_list": [ 00:13:42.270 { 00:13:42.270 "name": "BaseBdev1", 00:13:42.270 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:42.270 "is_configured": true, 00:13:42.270 "data_offset": 0, 00:13:42.270 "data_size": 65536 00:13:42.270 }, 00:13:42.270 { 00:13:42.270 "name": null, 00:13:42.270 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:42.270 "is_configured": false, 00:13:42.270 "data_offset": 0, 00:13:42.270 "data_size": 65536 00:13:42.270 }, 00:13:42.270 { 00:13:42.270 "name": "BaseBdev3", 00:13:42.270 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:42.270 "is_configured": true, 00:13:42.270 "data_offset": 0, 00:13:42.270 "data_size": 65536 00:13:42.270 }, 00:13:42.270 { 00:13:42.270 "name": "BaseBdev4", 00:13:42.270 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:42.270 "is_configured": true, 00:13:42.270 "data_offset": 0, 00:13:42.270 "data_size": 65536 00:13:42.270 } 00:13:42.270 ] 00:13:42.270 }' 00:13:42.270 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.270 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:42.834 22:30:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:42.834 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.835 [2024-09-27 22:30:38.550344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.835 "name": "Existed_Raid", 00:13:42.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.835 "strip_size_kb": 64, 00:13:42.835 "state": "configuring", 00:13:42.835 "raid_level": "concat", 00:13:42.835 "superblock": false, 00:13:42.835 "num_base_bdevs": 4, 00:13:42.835 "num_base_bdevs_discovered": 2, 00:13:42.835 "num_base_bdevs_operational": 4, 00:13:42.835 "base_bdevs_list": [ 00:13:42.835 { 00:13:42.835 "name": "BaseBdev1", 00:13:42.835 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:42.835 "is_configured": true, 00:13:42.835 "data_offset": 0, 00:13:42.835 "data_size": 65536 00:13:42.835 }, 00:13:42.835 { 00:13:42.835 "name": null, 00:13:42.835 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:42.835 "is_configured": false, 00:13:42.835 "data_offset": 0, 00:13:42.835 "data_size": 65536 00:13:42.835 }, 00:13:42.835 { 00:13:42.835 "name": null, 00:13:42.835 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:42.835 "is_configured": false, 00:13:42.835 "data_offset": 0, 00:13:42.835 "data_size": 65536 00:13:42.835 }, 00:13:42.835 { 00:13:42.835 "name": "BaseBdev4", 00:13:42.835 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:42.835 "is_configured": true, 00:13:42.835 "data_offset": 0, 00:13:42.835 "data_size": 65536 00:13:42.835 } 00:13:42.835 ] 00:13:42.835 }' 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.835 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.401 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:43.401 22:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:43.401 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.401 22:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.401 [2024-09-27 22:30:39.041801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.401 "name": "Existed_Raid", 00:13:43.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.401 "strip_size_kb": 64, 00:13:43.401 "state": "configuring", 00:13:43.401 "raid_level": "concat", 00:13:43.401 "superblock": false, 00:13:43.401 "num_base_bdevs": 4, 00:13:43.401 "num_base_bdevs_discovered": 3, 00:13:43.401 "num_base_bdevs_operational": 4, 00:13:43.401 "base_bdevs_list": [ 00:13:43.401 { 00:13:43.401 "name": "BaseBdev1", 00:13:43.401 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:43.401 "is_configured": true, 00:13:43.401 "data_offset": 0, 00:13:43.401 "data_size": 65536 00:13:43.401 }, 00:13:43.401 { 00:13:43.401 "name": null, 00:13:43.401 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:43.401 "is_configured": false, 00:13:43.401 "data_offset": 0, 00:13:43.401 "data_size": 65536 00:13:43.401 }, 00:13:43.401 { 00:13:43.401 "name": "BaseBdev3", 00:13:43.401 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:43.401 
"is_configured": true, 00:13:43.401 "data_offset": 0, 00:13:43.401 "data_size": 65536 00:13:43.401 }, 00:13:43.401 { 00:13:43.401 "name": "BaseBdev4", 00:13:43.401 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:43.401 "is_configured": true, 00:13:43.401 "data_offset": 0, 00:13:43.401 "data_size": 65536 00:13:43.401 } 00:13:43.401 ] 00:13:43.401 }' 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.401 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.660 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.660 [2024-09-27 22:30:39.505192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.919 "name": "Existed_Raid", 00:13:43.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.919 "strip_size_kb": 64, 00:13:43.919 "state": "configuring", 00:13:43.919 "raid_level": "concat", 00:13:43.919 "superblock": false, 00:13:43.919 "num_base_bdevs": 4, 00:13:43.919 "num_base_bdevs_discovered": 2, 00:13:43.919 "num_base_bdevs_operational": 4, 
00:13:43.919 "base_bdevs_list": [ 00:13:43.919 { 00:13:43.919 "name": null, 00:13:43.919 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:43.919 "is_configured": false, 00:13:43.919 "data_offset": 0, 00:13:43.919 "data_size": 65536 00:13:43.919 }, 00:13:43.919 { 00:13:43.919 "name": null, 00:13:43.919 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:43.919 "is_configured": false, 00:13:43.919 "data_offset": 0, 00:13:43.919 "data_size": 65536 00:13:43.919 }, 00:13:43.919 { 00:13:43.919 "name": "BaseBdev3", 00:13:43.919 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:43.919 "is_configured": true, 00:13:43.919 "data_offset": 0, 00:13:43.919 "data_size": 65536 00:13:43.919 }, 00:13:43.919 { 00:13:43.919 "name": "BaseBdev4", 00:13:43.919 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:43.919 "is_configured": true, 00:13:43.919 "data_offset": 0, 00:13:43.919 "data_size": 65536 00:13:43.919 } 00:13:43.919 ] 00:13:43.919 }' 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.919 22:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.177 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:44.177 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.177 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.177 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:44.435 22:30:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.435 [2024-09-27 22:30:40.063092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.435 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.435 "name": "Existed_Raid", 00:13:44.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.435 "strip_size_kb": 64, 00:13:44.435 "state": "configuring", 00:13:44.435 "raid_level": "concat", 00:13:44.435 "superblock": false, 00:13:44.435 "num_base_bdevs": 4, 00:13:44.435 "num_base_bdevs_discovered": 3, 00:13:44.435 "num_base_bdevs_operational": 4, 00:13:44.435 "base_bdevs_list": [ 00:13:44.435 { 00:13:44.435 "name": null, 00:13:44.435 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:44.435 "is_configured": false, 00:13:44.435 "data_offset": 0, 00:13:44.435 "data_size": 65536 00:13:44.435 }, 00:13:44.435 { 00:13:44.435 "name": "BaseBdev2", 00:13:44.435 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:44.435 "is_configured": true, 00:13:44.435 "data_offset": 0, 00:13:44.435 "data_size": 65536 00:13:44.435 }, 00:13:44.435 { 00:13:44.435 "name": "BaseBdev3", 00:13:44.435 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:44.435 "is_configured": true, 00:13:44.435 "data_offset": 0, 00:13:44.435 "data_size": 65536 00:13:44.435 }, 00:13:44.435 { 00:13:44.435 "name": "BaseBdev4", 00:13:44.435 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:44.435 "is_configured": true, 00:13:44.435 "data_offset": 0, 00:13:44.436 "data_size": 65536 00:13:44.436 } 00:13:44.436 ] 00:13:44.436 }' 00:13:44.436 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.436 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.694 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fcb212fc-76cd-46e4-8d3c-83a5fb5337c5 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.953 [2024-09-27 22:30:40.653658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:44.953 [2024-09-27 22:30:40.653726] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:44.953 [2024-09-27 22:30:40.653736] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:44.953 [2024-09-27 22:30:40.654074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:44.953 [2024-09-27 22:30:40.654230] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:44.953 [2024-09-27 22:30:40.654244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:44.953 [2024-09-27 22:30:40.654527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.953 NewBaseBdev 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.953 [ 00:13:44.953 { 
00:13:44.953 "name": "NewBaseBdev", 00:13:44.953 "aliases": [ 00:13:44.953 "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5" 00:13:44.953 ], 00:13:44.953 "product_name": "Malloc disk", 00:13:44.953 "block_size": 512, 00:13:44.953 "num_blocks": 65536, 00:13:44.953 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:44.953 "assigned_rate_limits": { 00:13:44.953 "rw_ios_per_sec": 0, 00:13:44.953 "rw_mbytes_per_sec": 0, 00:13:44.953 "r_mbytes_per_sec": 0, 00:13:44.953 "w_mbytes_per_sec": 0 00:13:44.953 }, 00:13:44.953 "claimed": true, 00:13:44.953 "claim_type": "exclusive_write", 00:13:44.953 "zoned": false, 00:13:44.953 "supported_io_types": { 00:13:44.953 "read": true, 00:13:44.953 "write": true, 00:13:44.953 "unmap": true, 00:13:44.953 "flush": true, 00:13:44.953 "reset": true, 00:13:44.953 "nvme_admin": false, 00:13:44.953 "nvme_io": false, 00:13:44.953 "nvme_io_md": false, 00:13:44.953 "write_zeroes": true, 00:13:44.953 "zcopy": true, 00:13:44.953 "get_zone_info": false, 00:13:44.953 "zone_management": false, 00:13:44.953 "zone_append": false, 00:13:44.953 "compare": false, 00:13:44.953 "compare_and_write": false, 00:13:44.953 "abort": true, 00:13:44.953 "seek_hole": false, 00:13:44.953 "seek_data": false, 00:13:44.953 "copy": true, 00:13:44.953 "nvme_iov_md": false 00:13:44.953 }, 00:13:44.953 "memory_domains": [ 00:13:44.953 { 00:13:44.953 "dma_device_id": "system", 00:13:44.953 "dma_device_type": 1 00:13:44.953 }, 00:13:44.953 { 00:13:44.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.953 "dma_device_type": 2 00:13:44.953 } 00:13:44.953 ], 00:13:44.953 "driver_specific": {} 00:13:44.953 } 00:13:44.953 ] 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:44.953 
22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.953 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.954 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.954 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.954 "name": "Existed_Raid", 00:13:44.954 "uuid": "bfca206e-c5b7-4ff2-b87b-64daad0ed583", 00:13:44.954 "strip_size_kb": 64, 00:13:44.954 "state": "online", 00:13:44.954 "raid_level": "concat", 00:13:44.954 "superblock": false, 00:13:44.954 "num_base_bdevs": 4, 00:13:44.954 "num_base_bdevs_discovered": 4, 00:13:44.954 
"num_base_bdevs_operational": 4, 00:13:44.954 "base_bdevs_list": [ 00:13:44.954 { 00:13:44.954 "name": "NewBaseBdev", 00:13:44.954 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 0, 00:13:44.954 "data_size": 65536 00:13:44.954 }, 00:13:44.954 { 00:13:44.954 "name": "BaseBdev2", 00:13:44.954 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 0, 00:13:44.954 "data_size": 65536 00:13:44.954 }, 00:13:44.954 { 00:13:44.954 "name": "BaseBdev3", 00:13:44.954 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 0, 00:13:44.954 "data_size": 65536 00:13:44.954 }, 00:13:44.954 { 00:13:44.954 "name": "BaseBdev4", 00:13:44.954 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 0, 00:13:44.954 "data_size": 65536 00:13:44.954 } 00:13:44.954 ] 00:13:44.954 }' 00:13:44.954 22:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.954 22:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:45.543 
22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.543 [2024-09-27 22:30:41.133487] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.543 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:45.543 "name": "Existed_Raid", 00:13:45.543 "aliases": [ 00:13:45.543 "bfca206e-c5b7-4ff2-b87b-64daad0ed583" 00:13:45.543 ], 00:13:45.543 "product_name": "Raid Volume", 00:13:45.543 "block_size": 512, 00:13:45.543 "num_blocks": 262144, 00:13:45.543 "uuid": "bfca206e-c5b7-4ff2-b87b-64daad0ed583", 00:13:45.543 "assigned_rate_limits": { 00:13:45.543 "rw_ios_per_sec": 0, 00:13:45.543 "rw_mbytes_per_sec": 0, 00:13:45.543 "r_mbytes_per_sec": 0, 00:13:45.543 "w_mbytes_per_sec": 0 00:13:45.543 }, 00:13:45.543 "claimed": false, 00:13:45.543 "zoned": false, 00:13:45.543 "supported_io_types": { 00:13:45.543 "read": true, 00:13:45.543 "write": true, 00:13:45.543 "unmap": true, 00:13:45.543 "flush": true, 00:13:45.543 "reset": true, 00:13:45.543 "nvme_admin": false, 00:13:45.543 "nvme_io": false, 00:13:45.543 "nvme_io_md": false, 00:13:45.543 "write_zeroes": true, 00:13:45.543 "zcopy": false, 00:13:45.543 "get_zone_info": false, 00:13:45.543 "zone_management": false, 00:13:45.543 "zone_append": false, 00:13:45.543 "compare": false, 00:13:45.543 "compare_and_write": false, 00:13:45.543 "abort": false, 00:13:45.543 "seek_hole": false, 00:13:45.543 "seek_data": false, 00:13:45.543 "copy": false, 00:13:45.543 "nvme_iov_md": false 00:13:45.543 }, 00:13:45.543 "memory_domains": [ 00:13:45.543 { 00:13:45.543 "dma_device_id": 
"system", 00:13:45.543 "dma_device_type": 1 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.543 "dma_device_type": 2 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "system", 00:13:45.543 "dma_device_type": 1 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.543 "dma_device_type": 2 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "system", 00:13:45.543 "dma_device_type": 1 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.543 "dma_device_type": 2 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "system", 00:13:45.543 "dma_device_type": 1 00:13:45.543 }, 00:13:45.543 { 00:13:45.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.543 "dma_device_type": 2 00:13:45.543 } 00:13:45.543 ], 00:13:45.543 "driver_specific": { 00:13:45.543 "raid": { 00:13:45.543 "uuid": "bfca206e-c5b7-4ff2-b87b-64daad0ed583", 00:13:45.543 "strip_size_kb": 64, 00:13:45.543 "state": "online", 00:13:45.543 "raid_level": "concat", 00:13:45.543 "superblock": false, 00:13:45.543 "num_base_bdevs": 4, 00:13:45.543 "num_base_bdevs_discovered": 4, 00:13:45.543 "num_base_bdevs_operational": 4, 00:13:45.543 "base_bdevs_list": [ 00:13:45.543 { 00:13:45.544 "name": "NewBaseBdev", 00:13:45.544 "uuid": "fcb212fc-76cd-46e4-8d3c-83a5fb5337c5", 00:13:45.544 "is_configured": true, 00:13:45.544 "data_offset": 0, 00:13:45.544 "data_size": 65536 00:13:45.544 }, 00:13:45.544 { 00:13:45.544 "name": "BaseBdev2", 00:13:45.544 "uuid": "7541e91e-d172-4f2e-93d0-517dcddc7078", 00:13:45.544 "is_configured": true, 00:13:45.544 "data_offset": 0, 00:13:45.544 "data_size": 65536 00:13:45.544 }, 00:13:45.544 { 00:13:45.544 "name": "BaseBdev3", 00:13:45.544 "uuid": "ea095901-e883-4bdf-be14-968609801c1e", 00:13:45.544 "is_configured": true, 00:13:45.544 "data_offset": 0, 00:13:45.544 "data_size": 65536 00:13:45.544 }, 00:13:45.544 { 00:13:45.544 "name": 
"BaseBdev4", 00:13:45.544 "uuid": "571bebfb-5c13-4648-bc46-63cc6f23a0ff", 00:13:45.544 "is_configured": true, 00:13:45.544 "data_offset": 0, 00:13:45.544 "data_size": 65536 00:13:45.544 } 00:13:45.544 ] 00:13:45.544 } 00:13:45.544 } 00:13:45.544 }' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:45.544 BaseBdev2 00:13:45.544 BaseBdev3 00:13:45.544 BaseBdev4' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.544 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:45.803 22:30:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.803 [2024-09-27 22:30:41.489136] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:45.803 [2024-09-27 22:30:41.489175] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.803 [2024-09-27 22:30:41.489264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.803 [2024-09-27 22:30:41.489338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.803 [2024-09-27 22:30:41.489351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72000 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 72000 ']' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72000 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72000 00:13:45.803 killing process with pid 72000 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72000' 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72000 00:13:45.803 [2024-09-27 22:30:41.528345] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.803 22:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72000 00:13:46.369 [2024-09-27 22:30:41.970082] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.270 ************************************ 00:13:48.270 END TEST raid_state_function_test 00:13:48.270 ************************************ 00:13:48.270 22:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:48.270 00:13:48.270 real 0m12.984s 00:13:48.270 user 0m19.686s 00:13:48.270 sys 0m2.511s 00:13:48.270 22:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.270 22:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 22:30:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:13:48.567 22:30:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:48.567 22:30:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.567 22:30:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 ************************************ 00:13:48.567 START TEST raid_state_function_test_sb 00:13:48.567 ************************************ 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:48.567 22:30:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:48.567 Process raid pid: 72688 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72688 00:13:48.567 22:30:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72688' 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72688 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72688 ']' 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.567 22:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 [2024-09-27 22:30:44.302338] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:13:48.567 [2024-09-27 22:30:44.302475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.847 [2024-09-27 22:30:44.478274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.106 [2024-09-27 22:30:44.731726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.364 [2024-09-27 22:30:44.988664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.364 [2024-09-27 22:30:44.988999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.931 [2024-09-27 22:30:45.515792] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.931 [2024-09-27 22:30:45.515862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.931 [2024-09-27 22:30:45.515875] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.931 [2024-09-27 22:30:45.515889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.931 [2024-09-27 22:30:45.515897] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:49.931 [2024-09-27 22:30:45.515913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:49.931 [2024-09-27 22:30:45.515921] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:49.931 [2024-09-27 22:30:45.515934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.931 
22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.931 "name": "Existed_Raid", 00:13:49.931 "uuid": "c8e3e40f-55a0-4b42-b7c3-d57e342a2f88", 00:13:49.931 "strip_size_kb": 64, 00:13:49.931 "state": "configuring", 00:13:49.931 "raid_level": "concat", 00:13:49.931 "superblock": true, 00:13:49.931 "num_base_bdevs": 4, 00:13:49.931 "num_base_bdevs_discovered": 0, 00:13:49.931 "num_base_bdevs_operational": 4, 00:13:49.931 "base_bdevs_list": [ 00:13:49.931 { 00:13:49.931 "name": "BaseBdev1", 00:13:49.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.931 "is_configured": false, 00:13:49.931 "data_offset": 0, 00:13:49.931 "data_size": 0 00:13:49.931 }, 00:13:49.931 { 00:13:49.931 "name": "BaseBdev2", 00:13:49.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.931 "is_configured": false, 00:13:49.931 "data_offset": 0, 00:13:49.931 "data_size": 0 00:13:49.931 }, 00:13:49.931 { 00:13:49.931 "name": "BaseBdev3", 00:13:49.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.931 "is_configured": false, 00:13:49.931 "data_offset": 0, 00:13:49.931 "data_size": 0 00:13:49.931 }, 00:13:49.931 { 00:13:49.931 "name": "BaseBdev4", 00:13:49.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.931 "is_configured": false, 00:13:49.931 "data_offset": 0, 00:13:49.931 "data_size": 0 00:13:49.931 } 00:13:49.931 ] 00:13:49.931 }' 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.931 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.190 22:30:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.190 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.190 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.190 [2024-09-27 22:30:45.991561] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.190 [2024-09-27 22:30:45.991613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:50.190 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.190 22:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.190 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.190 22:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.190 [2024-09-27 22:30:46.003613] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.190 [2024-09-27 22:30:46.003845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.190 [2024-09-27 22:30:46.003868] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.190 [2024-09-27 22:30:46.003883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.190 [2024-09-27 22:30:46.003891] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.190 [2024-09-27 22:30:46.003904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.190 [2024-09-27 22:30:46.003912] bdev.c:8309:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:50.190 [2024-09-27 22:30:46.003925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.190 [2024-09-27 22:30:46.062118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.190 BaseBdev1 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.190 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.191 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.449 [ 00:13:50.449 { 00:13:50.449 "name": "BaseBdev1", 00:13:50.449 "aliases": [ 00:13:50.449 "bedeb38b-3c16-4e21-9a3b-10e149609874" 00:13:50.449 ], 00:13:50.449 "product_name": "Malloc disk", 00:13:50.449 "block_size": 512, 00:13:50.449 "num_blocks": 65536, 00:13:50.449 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:50.449 "assigned_rate_limits": { 00:13:50.449 "rw_ios_per_sec": 0, 00:13:50.449 "rw_mbytes_per_sec": 0, 00:13:50.449 "r_mbytes_per_sec": 0, 00:13:50.449 "w_mbytes_per_sec": 0 00:13:50.449 }, 00:13:50.449 "claimed": true, 00:13:50.449 "claim_type": "exclusive_write", 00:13:50.449 "zoned": false, 00:13:50.449 "supported_io_types": { 00:13:50.449 "read": true, 00:13:50.449 "write": true, 00:13:50.449 "unmap": true, 00:13:50.449 "flush": true, 00:13:50.449 "reset": true, 00:13:50.449 "nvme_admin": false, 00:13:50.449 "nvme_io": false, 00:13:50.449 "nvme_io_md": false, 00:13:50.449 "write_zeroes": true, 00:13:50.449 "zcopy": true, 00:13:50.449 "get_zone_info": false, 00:13:50.449 "zone_management": false, 00:13:50.449 "zone_append": false, 00:13:50.449 "compare": false, 00:13:50.449 "compare_and_write": false, 00:13:50.449 "abort": true, 00:13:50.449 "seek_hole": false, 00:13:50.449 "seek_data": false, 00:13:50.449 "copy": true, 00:13:50.449 "nvme_iov_md": false 00:13:50.449 }, 00:13:50.449 "memory_domains": [ 00:13:50.449 { 00:13:50.449 "dma_device_id": "system", 00:13:50.449 "dma_device_type": 1 00:13:50.449 }, 00:13:50.449 { 00:13:50.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.449 "dma_device_type": 2 00:13:50.449 } 
00:13:50.449 ], 00:13:50.449 "driver_specific": {} 00:13:50.449 } 00:13:50.449 ] 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:50.449 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.450 22:30:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.450 "name": "Existed_Raid", 00:13:50.450 "uuid": "7da6e245-d941-46e7-83bb-ac285cef18dd", 00:13:50.450 "strip_size_kb": 64, 00:13:50.450 "state": "configuring", 00:13:50.450 "raid_level": "concat", 00:13:50.450 "superblock": true, 00:13:50.450 "num_base_bdevs": 4, 00:13:50.450 "num_base_bdevs_discovered": 1, 00:13:50.450 "num_base_bdevs_operational": 4, 00:13:50.450 "base_bdevs_list": [ 00:13:50.450 { 00:13:50.450 "name": "BaseBdev1", 00:13:50.450 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:50.450 "is_configured": true, 00:13:50.450 "data_offset": 2048, 00:13:50.450 "data_size": 63488 00:13:50.450 }, 00:13:50.450 { 00:13:50.450 "name": "BaseBdev2", 00:13:50.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.450 "is_configured": false, 00:13:50.450 "data_offset": 0, 00:13:50.450 "data_size": 0 00:13:50.450 }, 00:13:50.450 { 00:13:50.450 "name": "BaseBdev3", 00:13:50.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.450 "is_configured": false, 00:13:50.450 "data_offset": 0, 00:13:50.450 "data_size": 0 00:13:50.450 }, 00:13:50.450 { 00:13:50.450 "name": "BaseBdev4", 00:13:50.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.450 "is_configured": false, 00:13:50.450 "data_offset": 0, 00:13:50.450 "data_size": 0 00:13:50.450 } 00:13:50.450 ] 00:13:50.450 }' 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.450 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.709 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.709 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.709 22:30:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.709 [2024-09-27 22:30:46.577772] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.709 [2024-09-27 22:30:46.578014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:50.709 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.709 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.709 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.709 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.709 [2024-09-27 22:30:46.585820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.968 [2024-09-27 22:30:46.588222] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.968 [2024-09-27 22:30:46.588276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.968 [2024-09-27 22:30:46.588288] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:50.968 [2024-09-27 22:30:46.588305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:50.968 [2024-09-27 22:30:46.588313] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:50.968 [2024-09-27 22:30:46.588326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:50.968 "name": "Existed_Raid", 00:13:50.968 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:50.968 "strip_size_kb": 64, 00:13:50.968 "state": "configuring", 00:13:50.968 "raid_level": "concat", 00:13:50.968 "superblock": true, 00:13:50.968 "num_base_bdevs": 4, 00:13:50.968 "num_base_bdevs_discovered": 1, 00:13:50.968 "num_base_bdevs_operational": 4, 00:13:50.968 "base_bdevs_list": [ 00:13:50.968 { 00:13:50.968 "name": "BaseBdev1", 00:13:50.968 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:50.968 "is_configured": true, 00:13:50.968 "data_offset": 2048, 00:13:50.968 "data_size": 63488 00:13:50.968 }, 00:13:50.968 { 00:13:50.968 "name": "BaseBdev2", 00:13:50.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.968 "is_configured": false, 00:13:50.968 "data_offset": 0, 00:13:50.968 "data_size": 0 00:13:50.968 }, 00:13:50.968 { 00:13:50.968 "name": "BaseBdev3", 00:13:50.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.968 "is_configured": false, 00:13:50.968 "data_offset": 0, 00:13:50.968 "data_size": 0 00:13:50.968 }, 00:13:50.968 { 00:13:50.968 "name": "BaseBdev4", 00:13:50.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.968 "is_configured": false, 00:13:50.968 "data_offset": 0, 00:13:50.968 "data_size": 0 00:13:50.968 } 00:13:50.968 ] 00:13:50.968 }' 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.968 22:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.227 [2024-09-27 22:30:47.092076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:51.227 BaseBdev2 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.227 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.487 [ 00:13:51.487 { 00:13:51.487 "name": "BaseBdev2", 00:13:51.487 "aliases": [ 00:13:51.487 "1d472770-ce62-4e3c-93ae-53d2e323b6d0" 00:13:51.487 ], 00:13:51.487 "product_name": "Malloc disk", 00:13:51.487 "block_size": 512, 00:13:51.487 "num_blocks": 65536, 00:13:51.487 "uuid": "1d472770-ce62-4e3c-93ae-53d2e323b6d0", 
00:13:51.487 "assigned_rate_limits": { 00:13:51.487 "rw_ios_per_sec": 0, 00:13:51.487 "rw_mbytes_per_sec": 0, 00:13:51.487 "r_mbytes_per_sec": 0, 00:13:51.487 "w_mbytes_per_sec": 0 00:13:51.487 }, 00:13:51.487 "claimed": true, 00:13:51.487 "claim_type": "exclusive_write", 00:13:51.487 "zoned": false, 00:13:51.487 "supported_io_types": { 00:13:51.487 "read": true, 00:13:51.487 "write": true, 00:13:51.487 "unmap": true, 00:13:51.487 "flush": true, 00:13:51.487 "reset": true, 00:13:51.487 "nvme_admin": false, 00:13:51.487 "nvme_io": false, 00:13:51.487 "nvme_io_md": false, 00:13:51.487 "write_zeroes": true, 00:13:51.487 "zcopy": true, 00:13:51.487 "get_zone_info": false, 00:13:51.487 "zone_management": false, 00:13:51.487 "zone_append": false, 00:13:51.487 "compare": false, 00:13:51.487 "compare_and_write": false, 00:13:51.487 "abort": true, 00:13:51.487 "seek_hole": false, 00:13:51.487 "seek_data": false, 00:13:51.487 "copy": true, 00:13:51.487 "nvme_iov_md": false 00:13:51.487 }, 00:13:51.487 "memory_domains": [ 00:13:51.487 { 00:13:51.487 "dma_device_id": "system", 00:13:51.487 "dma_device_type": 1 00:13:51.487 }, 00:13:51.487 { 00:13:51.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.487 "dma_device_type": 2 00:13:51.487 } 00:13:51.487 ], 00:13:51.487 "driver_specific": {} 00:13:51.487 } 00:13:51.487 ] 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.487 "name": "Existed_Raid", 00:13:51.487 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:51.487 "strip_size_kb": 64, 00:13:51.487 "state": "configuring", 00:13:51.487 "raid_level": "concat", 00:13:51.487 "superblock": true, 00:13:51.487 "num_base_bdevs": 4, 00:13:51.487 "num_base_bdevs_discovered": 2, 00:13:51.487 
"num_base_bdevs_operational": 4, 00:13:51.487 "base_bdevs_list": [ 00:13:51.487 { 00:13:51.487 "name": "BaseBdev1", 00:13:51.487 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:51.487 "is_configured": true, 00:13:51.487 "data_offset": 2048, 00:13:51.487 "data_size": 63488 00:13:51.487 }, 00:13:51.487 { 00:13:51.487 "name": "BaseBdev2", 00:13:51.487 "uuid": "1d472770-ce62-4e3c-93ae-53d2e323b6d0", 00:13:51.487 "is_configured": true, 00:13:51.487 "data_offset": 2048, 00:13:51.487 "data_size": 63488 00:13:51.487 }, 00:13:51.487 { 00:13:51.487 "name": "BaseBdev3", 00:13:51.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.487 "is_configured": false, 00:13:51.487 "data_offset": 0, 00:13:51.487 "data_size": 0 00:13:51.487 }, 00:13:51.487 { 00:13:51.487 "name": "BaseBdev4", 00:13:51.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.487 "is_configured": false, 00:13:51.487 "data_offset": 0, 00:13:51.487 "data_size": 0 00:13:51.487 } 00:13:51.487 ] 00:13:51.487 }' 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.487 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.746 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.746 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.746 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.004 [2024-09-27 22:30:47.637745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.004 BaseBdev3 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.004 [ 00:13:52.004 { 00:13:52.004 "name": "BaseBdev3", 00:13:52.004 "aliases": [ 00:13:52.004 "5d4184c3-8dc3-4441-9fb4-d382738eb184" 00:13:52.004 ], 00:13:52.004 "product_name": "Malloc disk", 00:13:52.004 "block_size": 512, 00:13:52.004 "num_blocks": 65536, 00:13:52.004 "uuid": "5d4184c3-8dc3-4441-9fb4-d382738eb184", 00:13:52.004 "assigned_rate_limits": { 00:13:52.004 "rw_ios_per_sec": 0, 00:13:52.004 "rw_mbytes_per_sec": 0, 00:13:52.004 "r_mbytes_per_sec": 0, 00:13:52.004 "w_mbytes_per_sec": 0 00:13:52.004 }, 00:13:52.004 "claimed": true, 00:13:52.004 "claim_type": "exclusive_write", 00:13:52.004 "zoned": false, 00:13:52.004 "supported_io_types": { 
00:13:52.004 "read": true, 00:13:52.004 "write": true, 00:13:52.004 "unmap": true, 00:13:52.004 "flush": true, 00:13:52.004 "reset": true, 00:13:52.004 "nvme_admin": false, 00:13:52.004 "nvme_io": false, 00:13:52.004 "nvme_io_md": false, 00:13:52.004 "write_zeroes": true, 00:13:52.004 "zcopy": true, 00:13:52.004 "get_zone_info": false, 00:13:52.004 "zone_management": false, 00:13:52.004 "zone_append": false, 00:13:52.004 "compare": false, 00:13:52.004 "compare_and_write": false, 00:13:52.004 "abort": true, 00:13:52.004 "seek_hole": false, 00:13:52.004 "seek_data": false, 00:13:52.004 "copy": true, 00:13:52.004 "nvme_iov_md": false 00:13:52.004 }, 00:13:52.004 "memory_domains": [ 00:13:52.004 { 00:13:52.004 "dma_device_id": "system", 00:13:52.004 "dma_device_type": 1 00:13:52.004 }, 00:13:52.004 { 00:13:52.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.004 "dma_device_type": 2 00:13:52.004 } 00:13:52.004 ], 00:13:52.004 "driver_specific": {} 00:13:52.004 } 00:13:52.004 ] 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.004 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.005 "name": "Existed_Raid", 00:13:52.005 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:52.005 "strip_size_kb": 64, 00:13:52.005 "state": "configuring", 00:13:52.005 "raid_level": "concat", 00:13:52.005 "superblock": true, 00:13:52.005 "num_base_bdevs": 4, 00:13:52.005 "num_base_bdevs_discovered": 3, 00:13:52.005 "num_base_bdevs_operational": 4, 00:13:52.005 "base_bdevs_list": [ 00:13:52.005 { 00:13:52.005 "name": "BaseBdev1", 00:13:52.005 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:52.005 "is_configured": true, 00:13:52.005 "data_offset": 2048, 00:13:52.005 "data_size": 63488 00:13:52.005 }, 00:13:52.005 { 00:13:52.005 "name": "BaseBdev2", 00:13:52.005 
"uuid": "1d472770-ce62-4e3c-93ae-53d2e323b6d0", 00:13:52.005 "is_configured": true, 00:13:52.005 "data_offset": 2048, 00:13:52.005 "data_size": 63488 00:13:52.005 }, 00:13:52.005 { 00:13:52.005 "name": "BaseBdev3", 00:13:52.005 "uuid": "5d4184c3-8dc3-4441-9fb4-d382738eb184", 00:13:52.005 "is_configured": true, 00:13:52.005 "data_offset": 2048, 00:13:52.005 "data_size": 63488 00:13:52.005 }, 00:13:52.005 { 00:13:52.005 "name": "BaseBdev4", 00:13:52.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.005 "is_configured": false, 00:13:52.005 "data_offset": 0, 00:13:52.005 "data_size": 0 00:13:52.005 } 00:13:52.005 ] 00:13:52.005 }' 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.005 22:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.263 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.263 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.263 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.523 [2024-09-27 22:30:48.168579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.523 [2024-09-27 22:30:48.168873] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:52.523 [2024-09-27 22:30:48.168894] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:52.523 [2024-09-27 22:30:48.169239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.523 BaseBdev4 00:13:52.523 [2024-09-27 22:30:48.169397] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:52.523 [2024-09-27 22:30:48.169414] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:52.523 [2024-09-27 22:30:48.169567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.523 [ 00:13:52.523 { 00:13:52.523 "name": "BaseBdev4", 00:13:52.523 "aliases": [ 00:13:52.523 "20f179cd-c180-455f-a90f-3e290ed1eadd" 00:13:52.523 ], 00:13:52.523 "product_name": "Malloc disk", 00:13:52.523 "block_size": 512, 00:13:52.523 
"num_blocks": 65536, 00:13:52.523 "uuid": "20f179cd-c180-455f-a90f-3e290ed1eadd", 00:13:52.523 "assigned_rate_limits": { 00:13:52.523 "rw_ios_per_sec": 0, 00:13:52.523 "rw_mbytes_per_sec": 0, 00:13:52.523 "r_mbytes_per_sec": 0, 00:13:52.523 "w_mbytes_per_sec": 0 00:13:52.523 }, 00:13:52.523 "claimed": true, 00:13:52.523 "claim_type": "exclusive_write", 00:13:52.523 "zoned": false, 00:13:52.523 "supported_io_types": { 00:13:52.523 "read": true, 00:13:52.523 "write": true, 00:13:52.523 "unmap": true, 00:13:52.523 "flush": true, 00:13:52.523 "reset": true, 00:13:52.523 "nvme_admin": false, 00:13:52.523 "nvme_io": false, 00:13:52.523 "nvme_io_md": false, 00:13:52.523 "write_zeroes": true, 00:13:52.523 "zcopy": true, 00:13:52.523 "get_zone_info": false, 00:13:52.523 "zone_management": false, 00:13:52.523 "zone_append": false, 00:13:52.523 "compare": false, 00:13:52.523 "compare_and_write": false, 00:13:52.523 "abort": true, 00:13:52.523 "seek_hole": false, 00:13:52.523 "seek_data": false, 00:13:52.523 "copy": true, 00:13:52.523 "nvme_iov_md": false 00:13:52.523 }, 00:13:52.523 "memory_domains": [ 00:13:52.523 { 00:13:52.523 "dma_device_id": "system", 00:13:52.523 "dma_device_type": 1 00:13:52.523 }, 00:13:52.523 { 00:13:52.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.523 "dma_device_type": 2 00:13:52.523 } 00:13:52.523 ], 00:13:52.523 "driver_specific": {} 00:13:52.523 } 00:13:52.523 ] 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.523 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.524 "name": "Existed_Raid", 00:13:52.524 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:52.524 "strip_size_kb": 64, 00:13:52.524 "state": "online", 00:13:52.524 "raid_level": "concat", 00:13:52.524 "superblock": true, 00:13:52.524 "num_base_bdevs": 4, 
00:13:52.524 "num_base_bdevs_discovered": 4, 00:13:52.524 "num_base_bdevs_operational": 4, 00:13:52.524 "base_bdevs_list": [ 00:13:52.524 { 00:13:52.524 "name": "BaseBdev1", 00:13:52.524 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:52.524 "is_configured": true, 00:13:52.524 "data_offset": 2048, 00:13:52.524 "data_size": 63488 00:13:52.524 }, 00:13:52.524 { 00:13:52.524 "name": "BaseBdev2", 00:13:52.524 "uuid": "1d472770-ce62-4e3c-93ae-53d2e323b6d0", 00:13:52.524 "is_configured": true, 00:13:52.524 "data_offset": 2048, 00:13:52.524 "data_size": 63488 00:13:52.524 }, 00:13:52.524 { 00:13:52.524 "name": "BaseBdev3", 00:13:52.524 "uuid": "5d4184c3-8dc3-4441-9fb4-d382738eb184", 00:13:52.524 "is_configured": true, 00:13:52.524 "data_offset": 2048, 00:13:52.524 "data_size": 63488 00:13:52.524 }, 00:13:52.524 { 00:13:52.524 "name": "BaseBdev4", 00:13:52.524 "uuid": "20f179cd-c180-455f-a90f-3e290ed1eadd", 00:13:52.524 "is_configured": true, 00:13:52.524 "data_offset": 2048, 00:13:52.524 "data_size": 63488 00:13:52.524 } 00:13:52.524 ] 00:13:52.524 }' 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.524 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:52.782 
22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.782 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:53.041 [2024-09-27 22:30:48.664335] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.041 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.041 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:53.041 "name": "Existed_Raid", 00:13:53.041 "aliases": [ 00:13:53.041 "6ae25474-fba3-409a-a692-105a0871736a" 00:13:53.041 ], 00:13:53.041 "product_name": "Raid Volume", 00:13:53.041 "block_size": 512, 00:13:53.041 "num_blocks": 253952, 00:13:53.041 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:53.041 "assigned_rate_limits": { 00:13:53.041 "rw_ios_per_sec": 0, 00:13:53.041 "rw_mbytes_per_sec": 0, 00:13:53.041 "r_mbytes_per_sec": 0, 00:13:53.041 "w_mbytes_per_sec": 0 00:13:53.041 }, 00:13:53.041 "claimed": false, 00:13:53.041 "zoned": false, 00:13:53.041 "supported_io_types": { 00:13:53.041 "read": true, 00:13:53.041 "write": true, 00:13:53.041 "unmap": true, 00:13:53.041 "flush": true, 00:13:53.041 "reset": true, 00:13:53.041 "nvme_admin": false, 00:13:53.041 "nvme_io": false, 00:13:53.041 "nvme_io_md": false, 00:13:53.041 "write_zeroes": true, 00:13:53.041 "zcopy": false, 00:13:53.041 "get_zone_info": false, 00:13:53.041 "zone_management": false, 00:13:53.041 "zone_append": false, 00:13:53.041 "compare": false, 00:13:53.041 "compare_and_write": false, 00:13:53.041 "abort": false, 00:13:53.041 "seek_hole": false, 00:13:53.041 "seek_data": false, 00:13:53.041 "copy": false, 00:13:53.041 
"nvme_iov_md": false 00:13:53.041 }, 00:13:53.042 "memory_domains": [ 00:13:53.042 { 00:13:53.042 "dma_device_id": "system", 00:13:53.042 "dma_device_type": 1 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.042 "dma_device_type": 2 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "system", 00:13:53.042 "dma_device_type": 1 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.042 "dma_device_type": 2 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "system", 00:13:53.042 "dma_device_type": 1 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.042 "dma_device_type": 2 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "system", 00:13:53.042 "dma_device_type": 1 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.042 "dma_device_type": 2 00:13:53.042 } 00:13:53.042 ], 00:13:53.042 "driver_specific": { 00:13:53.042 "raid": { 00:13:53.042 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:53.042 "strip_size_kb": 64, 00:13:53.042 "state": "online", 00:13:53.042 "raid_level": "concat", 00:13:53.042 "superblock": true, 00:13:53.042 "num_base_bdevs": 4, 00:13:53.042 "num_base_bdevs_discovered": 4, 00:13:53.042 "num_base_bdevs_operational": 4, 00:13:53.042 "base_bdevs_list": [ 00:13:53.042 { 00:13:53.042 "name": "BaseBdev1", 00:13:53.042 "uuid": "bedeb38b-3c16-4e21-9a3b-10e149609874", 00:13:53.042 "is_configured": true, 00:13:53.042 "data_offset": 2048, 00:13:53.042 "data_size": 63488 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "name": "BaseBdev2", 00:13:53.042 "uuid": "1d472770-ce62-4e3c-93ae-53d2e323b6d0", 00:13:53.042 "is_configured": true, 00:13:53.042 "data_offset": 2048, 00:13:53.042 "data_size": 63488 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "name": "BaseBdev3", 00:13:53.042 "uuid": "5d4184c3-8dc3-4441-9fb4-d382738eb184", 00:13:53.042 "is_configured": true, 
00:13:53.042 "data_offset": 2048, 00:13:53.042 "data_size": 63488 00:13:53.042 }, 00:13:53.042 { 00:13:53.042 "name": "BaseBdev4", 00:13:53.042 "uuid": "20f179cd-c180-455f-a90f-3e290ed1eadd", 00:13:53.042 "is_configured": true, 00:13:53.042 "data_offset": 2048, 00:13:53.042 "data_size": 63488 00:13:53.042 } 00:13:53.042 ] 00:13:53.042 } 00:13:53.042 } 00:13:53.042 }' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:53.042 BaseBdev2 00:13:53.042 BaseBdev3 00:13:53.042 BaseBdev4' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.042 22:30:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.042 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.301 22:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.301 [2024-09-27 22:30:48.979653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.301 [2024-09-27 22:30:48.979700] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.301 [2024-09-27 22:30:48.979756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.301 "name": "Existed_Raid", 00:13:53.301 "uuid": "6ae25474-fba3-409a-a692-105a0871736a", 00:13:53.301 "strip_size_kb": 64, 00:13:53.301 "state": "offline", 00:13:53.301 "raid_level": "concat", 00:13:53.301 "superblock": true, 00:13:53.301 "num_base_bdevs": 4, 00:13:53.301 "num_base_bdevs_discovered": 3, 00:13:53.301 "num_base_bdevs_operational": 3, 00:13:53.301 "base_bdevs_list": [ 00:13:53.301 { 00:13:53.301 "name": null, 00:13:53.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.301 "is_configured": false, 00:13:53.301 "data_offset": 0, 00:13:53.301 "data_size": 63488 00:13:53.301 }, 00:13:53.301 { 00:13:53.301 "name": "BaseBdev2", 00:13:53.301 "uuid": "1d472770-ce62-4e3c-93ae-53d2e323b6d0", 00:13:53.301 "is_configured": true, 00:13:53.301 "data_offset": 2048, 00:13:53.301 "data_size": 63488 00:13:53.301 }, 00:13:53.301 { 00:13:53.301 "name": "BaseBdev3", 00:13:53.301 "uuid": "5d4184c3-8dc3-4441-9fb4-d382738eb184", 00:13:53.301 "is_configured": true, 00:13:53.301 "data_offset": 2048, 00:13:53.301 "data_size": 63488 00:13:53.301 }, 00:13:53.301 { 00:13:53.301 "name": "BaseBdev4", 00:13:53.301 "uuid": "20f179cd-c180-455f-a90f-3e290ed1eadd", 00:13:53.301 "is_configured": true, 00:13:53.301 "data_offset": 2048, 00:13:53.301 "data_size": 63488 00:13:53.301 } 00:13:53.301 ] 00:13:53.301 }' 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.301 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.868 
22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.868 [2024-09-27 22:30:49.578453] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.868 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.127 [2024-09-27 22:30:49.745250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:54.127 22:30:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.127 22:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.127 [2024-09-27 22:30:49.909886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:54.127 [2024-09-27 22:30:49.910000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 BaseBdev2 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 [ 00:13:54.387 { 00:13:54.387 "name": "BaseBdev2", 00:13:54.387 "aliases": [ 00:13:54.387 
"b9a99c3c-ab77-4462-a38f-1aa665addfc1" 00:13:54.387 ], 00:13:54.387 "product_name": "Malloc disk", 00:13:54.387 "block_size": 512, 00:13:54.387 "num_blocks": 65536, 00:13:54.387 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:54.387 "assigned_rate_limits": { 00:13:54.387 "rw_ios_per_sec": 0, 00:13:54.387 "rw_mbytes_per_sec": 0, 00:13:54.387 "r_mbytes_per_sec": 0, 00:13:54.387 "w_mbytes_per_sec": 0 00:13:54.387 }, 00:13:54.387 "claimed": false, 00:13:54.387 "zoned": false, 00:13:54.387 "supported_io_types": { 00:13:54.387 "read": true, 00:13:54.387 "write": true, 00:13:54.387 "unmap": true, 00:13:54.387 "flush": true, 00:13:54.387 "reset": true, 00:13:54.387 "nvme_admin": false, 00:13:54.387 "nvme_io": false, 00:13:54.387 "nvme_io_md": false, 00:13:54.387 "write_zeroes": true, 00:13:54.387 "zcopy": true, 00:13:54.387 "get_zone_info": false, 00:13:54.387 "zone_management": false, 00:13:54.387 "zone_append": false, 00:13:54.387 "compare": false, 00:13:54.387 "compare_and_write": false, 00:13:54.387 "abort": true, 00:13:54.387 "seek_hole": false, 00:13:54.387 "seek_data": false, 00:13:54.387 "copy": true, 00:13:54.387 "nvme_iov_md": false 00:13:54.387 }, 00:13:54.387 "memory_domains": [ 00:13:54.387 { 00:13:54.387 "dma_device_id": "system", 00:13:54.387 "dma_device_type": 1 00:13:54.387 }, 00:13:54.387 { 00:13:54.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.387 "dma_device_type": 2 00:13:54.387 } 00:13:54.387 ], 00:13:54.387 "driver_specific": {} 00:13:54.387 } 00:13:54.387 ] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.387 22:30:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 BaseBdev3 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.387 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.387 [ 00:13:54.387 { 
00:13:54.387 "name": "BaseBdev3", 00:13:54.387 "aliases": [ 00:13:54.387 "19437010-1a88-4307-973e-8a87e61c283a" 00:13:54.387 ], 00:13:54.387 "product_name": "Malloc disk", 00:13:54.387 "block_size": 512, 00:13:54.387 "num_blocks": 65536, 00:13:54.387 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:54.387 "assigned_rate_limits": { 00:13:54.387 "rw_ios_per_sec": 0, 00:13:54.387 "rw_mbytes_per_sec": 0, 00:13:54.387 "r_mbytes_per_sec": 0, 00:13:54.387 "w_mbytes_per_sec": 0 00:13:54.647 }, 00:13:54.647 "claimed": false, 00:13:54.647 "zoned": false, 00:13:54.647 "supported_io_types": { 00:13:54.647 "read": true, 00:13:54.647 "write": true, 00:13:54.647 "unmap": true, 00:13:54.647 "flush": true, 00:13:54.647 "reset": true, 00:13:54.647 "nvme_admin": false, 00:13:54.647 "nvme_io": false, 00:13:54.647 "nvme_io_md": false, 00:13:54.647 "write_zeroes": true, 00:13:54.647 "zcopy": true, 00:13:54.647 "get_zone_info": false, 00:13:54.647 "zone_management": false, 00:13:54.647 "zone_append": false, 00:13:54.647 "compare": false, 00:13:54.647 "compare_and_write": false, 00:13:54.647 "abort": true, 00:13:54.647 "seek_hole": false, 00:13:54.647 "seek_data": false, 00:13:54.647 "copy": true, 00:13:54.647 "nvme_iov_md": false 00:13:54.647 }, 00:13:54.647 "memory_domains": [ 00:13:54.647 { 00:13:54.647 "dma_device_id": "system", 00:13:54.647 "dma_device_type": 1 00:13:54.647 }, 00:13:54.647 { 00:13:54.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.647 "dma_device_type": 2 00:13:54.647 } 00:13:54.647 ], 00:13:54.647 "driver_specific": {} 00:13:54.647 } 00:13:54.647 ] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.647 BaseBdev4 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:54.647 [ 00:13:54.647 { 00:13:54.647 "name": "BaseBdev4", 00:13:54.647 "aliases": [ 00:13:54.647 "92e8ad97-3e6f-44c6-930e-42dabcef8413" 00:13:54.647 ], 00:13:54.647 "product_name": "Malloc disk", 00:13:54.647 "block_size": 512, 00:13:54.647 "num_blocks": 65536, 00:13:54.647 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:54.647 "assigned_rate_limits": { 00:13:54.647 "rw_ios_per_sec": 0, 00:13:54.647 "rw_mbytes_per_sec": 0, 00:13:54.647 "r_mbytes_per_sec": 0, 00:13:54.647 "w_mbytes_per_sec": 0 00:13:54.647 }, 00:13:54.647 "claimed": false, 00:13:54.647 "zoned": false, 00:13:54.647 "supported_io_types": { 00:13:54.647 "read": true, 00:13:54.647 "write": true, 00:13:54.647 "unmap": true, 00:13:54.647 "flush": true, 00:13:54.647 "reset": true, 00:13:54.647 "nvme_admin": false, 00:13:54.647 "nvme_io": false, 00:13:54.647 "nvme_io_md": false, 00:13:54.647 "write_zeroes": true, 00:13:54.647 "zcopy": true, 00:13:54.647 "get_zone_info": false, 00:13:54.647 "zone_management": false, 00:13:54.647 "zone_append": false, 00:13:54.647 "compare": false, 00:13:54.647 "compare_and_write": false, 00:13:54.647 "abort": true, 00:13:54.647 "seek_hole": false, 00:13:54.647 "seek_data": false, 00:13:54.647 "copy": true, 00:13:54.647 "nvme_iov_md": false 00:13:54.647 }, 00:13:54.647 "memory_domains": [ 00:13:54.647 { 00:13:54.647 "dma_device_id": "system", 00:13:54.647 "dma_device_type": 1 00:13:54.647 }, 00:13:54.647 { 00:13:54.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.647 "dma_device_type": 2 00:13:54.647 } 00:13:54.647 ], 00:13:54.647 "driver_specific": {} 00:13:54.647 } 00:13:54.647 ] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:54.647 22:30:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.647 [2024-09-27 22:30:50.397074] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.647 [2024-09-27 22:30:50.397132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.647 [2024-09-27 22:30:50.397161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.647 [2024-09-27 22:30:50.399470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.647 [2024-09-27 22:30:50.399677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.647 "name": "Existed_Raid", 00:13:54.647 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:54.647 "strip_size_kb": 64, 00:13:54.647 "state": "configuring", 00:13:54.647 "raid_level": "concat", 00:13:54.647 "superblock": true, 00:13:54.647 "num_base_bdevs": 4, 00:13:54.647 "num_base_bdevs_discovered": 3, 00:13:54.647 "num_base_bdevs_operational": 4, 00:13:54.647 "base_bdevs_list": [ 00:13:54.647 { 00:13:54.647 "name": "BaseBdev1", 00:13:54.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.648 "is_configured": false, 00:13:54.648 "data_offset": 0, 00:13:54.648 "data_size": 0 00:13:54.648 }, 00:13:54.648 { 00:13:54.648 "name": "BaseBdev2", 00:13:54.648 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:54.648 "is_configured": true, 00:13:54.648 "data_offset": 2048, 00:13:54.648 "data_size": 63488 
00:13:54.648 }, 00:13:54.648 { 00:13:54.648 "name": "BaseBdev3", 00:13:54.648 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:54.648 "is_configured": true, 00:13:54.648 "data_offset": 2048, 00:13:54.648 "data_size": 63488 00:13:54.648 }, 00:13:54.648 { 00:13:54.648 "name": "BaseBdev4", 00:13:54.648 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:54.648 "is_configured": true, 00:13:54.648 "data_offset": 2048, 00:13:54.648 "data_size": 63488 00:13:54.648 } 00:13:54.648 ] 00:13:54.648 }' 00:13:54.648 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.648 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.968 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:54.968 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.968 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.968 [2024-09-27 22:30:50.840418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.228 "name": "Existed_Raid", 00:13:55.228 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:55.228 "strip_size_kb": 64, 00:13:55.228 "state": "configuring", 00:13:55.228 "raid_level": "concat", 00:13:55.228 "superblock": true, 00:13:55.228 "num_base_bdevs": 4, 00:13:55.228 "num_base_bdevs_discovered": 2, 00:13:55.228 "num_base_bdevs_operational": 4, 00:13:55.228 "base_bdevs_list": [ 00:13:55.228 { 00:13:55.228 "name": "BaseBdev1", 00:13:55.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.228 "is_configured": false, 00:13:55.228 "data_offset": 0, 00:13:55.228 "data_size": 0 00:13:55.228 }, 00:13:55.228 { 00:13:55.228 "name": null, 00:13:55.228 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:55.228 "is_configured": false, 00:13:55.228 "data_offset": 0, 00:13:55.228 "data_size": 63488 
00:13:55.228 }, 00:13:55.228 { 00:13:55.228 "name": "BaseBdev3", 00:13:55.228 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:55.228 "is_configured": true, 00:13:55.228 "data_offset": 2048, 00:13:55.228 "data_size": 63488 00:13:55.228 }, 00:13:55.228 { 00:13:55.228 "name": "BaseBdev4", 00:13:55.228 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:55.228 "is_configured": true, 00:13:55.228 "data_offset": 2048, 00:13:55.228 "data_size": 63488 00:13:55.228 } 00:13:55.228 ] 00:13:55.228 }' 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.228 22:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.487 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.746 [2024-09-27 22:30:51.373707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.746 BaseBdev1 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.746 [ 00:13:55.746 { 00:13:55.746 "name": "BaseBdev1", 00:13:55.746 "aliases": [ 00:13:55.746 "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06" 00:13:55.746 ], 00:13:55.746 "product_name": "Malloc disk", 00:13:55.746 "block_size": 512, 00:13:55.746 "num_blocks": 65536, 00:13:55.746 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:55.746 "assigned_rate_limits": { 00:13:55.746 "rw_ios_per_sec": 0, 00:13:55.746 "rw_mbytes_per_sec": 0, 
00:13:55.746 "r_mbytes_per_sec": 0, 00:13:55.746 "w_mbytes_per_sec": 0 00:13:55.746 }, 00:13:55.746 "claimed": true, 00:13:55.746 "claim_type": "exclusive_write", 00:13:55.746 "zoned": false, 00:13:55.746 "supported_io_types": { 00:13:55.746 "read": true, 00:13:55.746 "write": true, 00:13:55.746 "unmap": true, 00:13:55.746 "flush": true, 00:13:55.746 "reset": true, 00:13:55.746 "nvme_admin": false, 00:13:55.746 "nvme_io": false, 00:13:55.746 "nvme_io_md": false, 00:13:55.746 "write_zeroes": true, 00:13:55.746 "zcopy": true, 00:13:55.746 "get_zone_info": false, 00:13:55.746 "zone_management": false, 00:13:55.746 "zone_append": false, 00:13:55.746 "compare": false, 00:13:55.746 "compare_and_write": false, 00:13:55.746 "abort": true, 00:13:55.746 "seek_hole": false, 00:13:55.746 "seek_data": false, 00:13:55.746 "copy": true, 00:13:55.746 "nvme_iov_md": false 00:13:55.746 }, 00:13:55.746 "memory_domains": [ 00:13:55.746 { 00:13:55.746 "dma_device_id": "system", 00:13:55.746 "dma_device_type": 1 00:13:55.746 }, 00:13:55.746 { 00:13:55.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.746 "dma_device_type": 2 00:13:55.746 } 00:13:55.746 ], 00:13:55.746 "driver_specific": {} 00:13:55.746 } 00:13:55.746 ] 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.746 22:30:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.746 "name": "Existed_Raid", 00:13:55.746 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:55.746 "strip_size_kb": 64, 00:13:55.746 "state": "configuring", 00:13:55.746 "raid_level": "concat", 00:13:55.746 "superblock": true, 00:13:55.746 "num_base_bdevs": 4, 00:13:55.746 "num_base_bdevs_discovered": 3, 00:13:55.746 "num_base_bdevs_operational": 4, 00:13:55.746 "base_bdevs_list": [ 00:13:55.746 { 00:13:55.746 "name": "BaseBdev1", 00:13:55.746 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:55.746 "is_configured": true, 00:13:55.746 "data_offset": 2048, 00:13:55.746 "data_size": 63488 00:13:55.746 }, 00:13:55.746 { 
00:13:55.746 "name": null, 00:13:55.746 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:55.746 "is_configured": false, 00:13:55.746 "data_offset": 0, 00:13:55.746 "data_size": 63488 00:13:55.746 }, 00:13:55.746 { 00:13:55.746 "name": "BaseBdev3", 00:13:55.746 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:55.746 "is_configured": true, 00:13:55.746 "data_offset": 2048, 00:13:55.746 "data_size": 63488 00:13:55.746 }, 00:13:55.746 { 00:13:55.746 "name": "BaseBdev4", 00:13:55.746 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:55.746 "is_configured": true, 00:13:55.746 "data_offset": 2048, 00:13:55.746 "data_size": 63488 00:13:55.746 } 00:13:55.746 ] 00:13:55.746 }' 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.746 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.315 [2024-09-27 22:30:51.953026] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.315 22:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.315 22:30:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.315 "name": "Existed_Raid", 00:13:56.315 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:56.315 "strip_size_kb": 64, 00:13:56.315 "state": "configuring", 00:13:56.315 "raid_level": "concat", 00:13:56.315 "superblock": true, 00:13:56.315 "num_base_bdevs": 4, 00:13:56.315 "num_base_bdevs_discovered": 2, 00:13:56.315 "num_base_bdevs_operational": 4, 00:13:56.315 "base_bdevs_list": [ 00:13:56.315 { 00:13:56.315 "name": "BaseBdev1", 00:13:56.315 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:56.315 "is_configured": true, 00:13:56.315 "data_offset": 2048, 00:13:56.315 "data_size": 63488 00:13:56.315 }, 00:13:56.315 { 00:13:56.315 "name": null, 00:13:56.315 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:56.315 "is_configured": false, 00:13:56.315 "data_offset": 0, 00:13:56.315 "data_size": 63488 00:13:56.315 }, 00:13:56.315 { 00:13:56.315 "name": null, 00:13:56.315 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:56.315 "is_configured": false, 00:13:56.315 "data_offset": 0, 00:13:56.315 "data_size": 63488 00:13:56.315 }, 00:13:56.315 { 00:13:56.315 "name": "BaseBdev4", 00:13:56.315 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:56.315 "is_configured": true, 00:13:56.315 "data_offset": 2048, 00:13:56.315 "data_size": 63488 00:13:56.315 } 00:13:56.315 ] 00:13:56.315 }' 00:13:56.315 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.315 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.574 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:56.574 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.574 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.574 
22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.574 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.832 [2024-09-27 22:30:52.472351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.832 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.833 "name": "Existed_Raid", 00:13:56.833 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:56.833 "strip_size_kb": 64, 00:13:56.833 "state": "configuring", 00:13:56.833 "raid_level": "concat", 00:13:56.833 "superblock": true, 00:13:56.833 "num_base_bdevs": 4, 00:13:56.833 "num_base_bdevs_discovered": 3, 00:13:56.833 "num_base_bdevs_operational": 4, 00:13:56.833 "base_bdevs_list": [ 00:13:56.833 { 00:13:56.833 "name": "BaseBdev1", 00:13:56.833 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:56.833 "is_configured": true, 00:13:56.833 "data_offset": 2048, 00:13:56.833 "data_size": 63488 00:13:56.833 }, 00:13:56.833 { 00:13:56.833 "name": null, 00:13:56.833 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:56.833 "is_configured": false, 00:13:56.833 "data_offset": 0, 00:13:56.833 "data_size": 63488 00:13:56.833 }, 00:13:56.833 { 00:13:56.833 "name": "BaseBdev3", 00:13:56.833 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:56.833 "is_configured": true, 00:13:56.833 "data_offset": 2048, 00:13:56.833 "data_size": 63488 00:13:56.833 }, 00:13:56.833 { 00:13:56.833 "name": "BaseBdev4", 00:13:56.833 "uuid": 
"92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:56.833 "is_configured": true, 00:13:56.833 "data_offset": 2048, 00:13:56.833 "data_size": 63488 00:13:56.833 } 00:13:56.833 ] 00:13:56.833 }' 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.833 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.106 22:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.106 [2024-09-27 22:30:52.975656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.373 "name": "Existed_Raid", 00:13:57.373 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:57.373 "strip_size_kb": 64, 00:13:57.373 "state": "configuring", 00:13:57.373 "raid_level": "concat", 00:13:57.373 "superblock": true, 00:13:57.373 "num_base_bdevs": 4, 00:13:57.373 "num_base_bdevs_discovered": 2, 00:13:57.373 "num_base_bdevs_operational": 4, 00:13:57.373 "base_bdevs_list": [ 00:13:57.373 { 00:13:57.373 "name": null, 00:13:57.373 
"uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:57.373 "is_configured": false, 00:13:57.373 "data_offset": 0, 00:13:57.373 "data_size": 63488 00:13:57.373 }, 00:13:57.373 { 00:13:57.373 "name": null, 00:13:57.373 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:57.373 "is_configured": false, 00:13:57.373 "data_offset": 0, 00:13:57.373 "data_size": 63488 00:13:57.373 }, 00:13:57.373 { 00:13:57.373 "name": "BaseBdev3", 00:13:57.373 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:57.373 "is_configured": true, 00:13:57.373 "data_offset": 2048, 00:13:57.373 "data_size": 63488 00:13:57.373 }, 00:13:57.373 { 00:13:57.373 "name": "BaseBdev4", 00:13:57.373 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:57.373 "is_configured": true, 00:13:57.373 "data_offset": 2048, 00:13:57.373 "data_size": 63488 00:13:57.373 } 00:13:57.373 ] 00:13:57.373 }' 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.373 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.940 [2024-09-27 22:30:53.574286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.940 22:30:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.940 "name": "Existed_Raid", 00:13:57.940 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:57.940 "strip_size_kb": 64, 00:13:57.940 "state": "configuring", 00:13:57.940 "raid_level": "concat", 00:13:57.940 "superblock": true, 00:13:57.940 "num_base_bdevs": 4, 00:13:57.940 "num_base_bdevs_discovered": 3, 00:13:57.940 "num_base_bdevs_operational": 4, 00:13:57.940 "base_bdevs_list": [ 00:13:57.940 { 00:13:57.940 "name": null, 00:13:57.940 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:57.940 "is_configured": false, 00:13:57.940 "data_offset": 0, 00:13:57.940 "data_size": 63488 00:13:57.940 }, 00:13:57.940 { 00:13:57.940 "name": "BaseBdev2", 00:13:57.940 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:57.940 "is_configured": true, 00:13:57.940 "data_offset": 2048, 00:13:57.940 "data_size": 63488 00:13:57.940 }, 00:13:57.940 { 00:13:57.940 "name": "BaseBdev3", 00:13:57.940 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:57.940 "is_configured": true, 00:13:57.940 "data_offset": 2048, 00:13:57.940 "data_size": 63488 00:13:57.940 }, 00:13:57.940 { 00:13:57.940 "name": "BaseBdev4", 00:13:57.940 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:57.940 "is_configured": true, 00:13:57.940 "data_offset": 2048, 00:13:57.940 "data_size": 63488 00:13:57.940 } 00:13:57.940 ] 00:13:57.940 }' 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.940 22:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.228 22:30:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bfd8efb4-fa2b-48ce-b1a4-b28208b23a06 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.228 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.487 [2024-09-27 22:30:54.140221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:58.487 [2024-09-27 22:30:54.140488] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:58.487 [2024-09-27 22:30:54.140503] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:58.487 [2024-09-27 22:30:54.140797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:58.487 NewBaseBdev 00:13:58.487 [2024-09-27 22:30:54.140942] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:58.487 [2024-09-27 22:30:54.140956] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:58.487 [2024-09-27 22:30:54.141119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.487 22:30:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.487 [ 00:13:58.487 { 00:13:58.487 "name": "NewBaseBdev", 00:13:58.487 "aliases": [ 00:13:58.487 "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06" 00:13:58.487 ], 00:13:58.487 "product_name": "Malloc disk", 00:13:58.487 "block_size": 512, 00:13:58.487 "num_blocks": 65536, 00:13:58.487 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:58.487 "assigned_rate_limits": { 00:13:58.487 "rw_ios_per_sec": 0, 00:13:58.487 "rw_mbytes_per_sec": 0, 00:13:58.487 "r_mbytes_per_sec": 0, 00:13:58.487 "w_mbytes_per_sec": 0 00:13:58.487 }, 00:13:58.487 "claimed": true, 00:13:58.487 "claim_type": "exclusive_write", 00:13:58.487 "zoned": false, 00:13:58.487 "supported_io_types": { 00:13:58.487 "read": true, 00:13:58.487 "write": true, 00:13:58.487 "unmap": true, 00:13:58.487 "flush": true, 00:13:58.487 "reset": true, 00:13:58.487 "nvme_admin": false, 00:13:58.487 "nvme_io": false, 00:13:58.487 "nvme_io_md": false, 00:13:58.487 "write_zeroes": true, 00:13:58.487 "zcopy": true, 00:13:58.487 "get_zone_info": false, 00:13:58.487 "zone_management": false, 00:13:58.487 "zone_append": false, 00:13:58.487 "compare": false, 00:13:58.487 "compare_and_write": false, 00:13:58.487 "abort": true, 00:13:58.487 "seek_hole": false, 00:13:58.487 "seek_data": false, 00:13:58.487 "copy": true, 00:13:58.487 "nvme_iov_md": false 00:13:58.487 }, 00:13:58.487 "memory_domains": [ 00:13:58.487 { 00:13:58.487 "dma_device_id": "system", 00:13:58.487 "dma_device_type": 1 00:13:58.487 }, 00:13:58.487 { 00:13:58.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.487 "dma_device_type": 2 00:13:58.487 } 00:13:58.487 ], 00:13:58.487 "driver_specific": {} 00:13:58.487 } 00:13:58.487 ] 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.487 22:30:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.487 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.488 "name": "Existed_Raid", 00:13:58.488 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:58.488 "strip_size_kb": 64, 00:13:58.488 
"state": "online", 00:13:58.488 "raid_level": "concat", 00:13:58.488 "superblock": true, 00:13:58.488 "num_base_bdevs": 4, 00:13:58.488 "num_base_bdevs_discovered": 4, 00:13:58.488 "num_base_bdevs_operational": 4, 00:13:58.488 "base_bdevs_list": [ 00:13:58.488 { 00:13:58.488 "name": "NewBaseBdev", 00:13:58.488 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:58.488 "is_configured": true, 00:13:58.488 "data_offset": 2048, 00:13:58.488 "data_size": 63488 00:13:58.488 }, 00:13:58.488 { 00:13:58.488 "name": "BaseBdev2", 00:13:58.488 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:58.488 "is_configured": true, 00:13:58.488 "data_offset": 2048, 00:13:58.488 "data_size": 63488 00:13:58.488 }, 00:13:58.488 { 00:13:58.488 "name": "BaseBdev3", 00:13:58.488 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:58.488 "is_configured": true, 00:13:58.488 "data_offset": 2048, 00:13:58.488 "data_size": 63488 00:13:58.488 }, 00:13:58.488 { 00:13:58.488 "name": "BaseBdev4", 00:13:58.488 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:58.488 "is_configured": true, 00:13:58.488 "data_offset": 2048, 00:13:58.488 "data_size": 63488 00:13:58.488 } 00:13:58.488 ] 00:13:58.488 }' 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.488 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.745 
22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.745 [2024-09-27 22:30:54.584123] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.745 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.003 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.003 "name": "Existed_Raid", 00:13:59.003 "aliases": [ 00:13:59.003 "cc24b8e9-8238-4bb3-81d3-c6fe70202d37" 00:13:59.003 ], 00:13:59.003 "product_name": "Raid Volume", 00:13:59.003 "block_size": 512, 00:13:59.003 "num_blocks": 253952, 00:13:59.003 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:59.003 "assigned_rate_limits": { 00:13:59.003 "rw_ios_per_sec": 0, 00:13:59.003 "rw_mbytes_per_sec": 0, 00:13:59.003 "r_mbytes_per_sec": 0, 00:13:59.003 "w_mbytes_per_sec": 0 00:13:59.003 }, 00:13:59.003 "claimed": false, 00:13:59.003 "zoned": false, 00:13:59.003 "supported_io_types": { 00:13:59.003 "read": true, 00:13:59.003 "write": true, 00:13:59.003 "unmap": true, 00:13:59.003 "flush": true, 00:13:59.003 "reset": true, 00:13:59.003 "nvme_admin": false, 00:13:59.003 "nvme_io": false, 00:13:59.003 "nvme_io_md": false, 00:13:59.003 "write_zeroes": true, 00:13:59.003 "zcopy": false, 00:13:59.003 "get_zone_info": false, 00:13:59.003 "zone_management": false, 00:13:59.003 "zone_append": false, 00:13:59.003 "compare": false, 00:13:59.003 "compare_and_write": false, 00:13:59.003 "abort": 
false, 00:13:59.003 "seek_hole": false, 00:13:59.003 "seek_data": false, 00:13:59.003 "copy": false, 00:13:59.003 "nvme_iov_md": false 00:13:59.003 }, 00:13:59.003 "memory_domains": [ 00:13:59.003 { 00:13:59.003 "dma_device_id": "system", 00:13:59.003 "dma_device_type": 1 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.003 "dma_device_type": 2 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "system", 00:13:59.003 "dma_device_type": 1 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.003 "dma_device_type": 2 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "system", 00:13:59.003 "dma_device_type": 1 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.003 "dma_device_type": 2 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "system", 00:13:59.003 "dma_device_type": 1 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.003 "dma_device_type": 2 00:13:59.003 } 00:13:59.003 ], 00:13:59.003 "driver_specific": { 00:13:59.003 "raid": { 00:13:59.003 "uuid": "cc24b8e9-8238-4bb3-81d3-c6fe70202d37", 00:13:59.003 "strip_size_kb": 64, 00:13:59.003 "state": "online", 00:13:59.003 "raid_level": "concat", 00:13:59.003 "superblock": true, 00:13:59.003 "num_base_bdevs": 4, 00:13:59.003 "num_base_bdevs_discovered": 4, 00:13:59.003 "num_base_bdevs_operational": 4, 00:13:59.003 "base_bdevs_list": [ 00:13:59.003 { 00:13:59.003 "name": "NewBaseBdev", 00:13:59.003 "uuid": "bfd8efb4-fa2b-48ce-b1a4-b28208b23a06", 00:13:59.003 "is_configured": true, 00:13:59.003 "data_offset": 2048, 00:13:59.003 "data_size": 63488 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "name": "BaseBdev2", 00:13:59.003 "uuid": "b9a99c3c-ab77-4462-a38f-1aa665addfc1", 00:13:59.003 "is_configured": true, 00:13:59.003 "data_offset": 2048, 00:13:59.003 "data_size": 63488 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 
"name": "BaseBdev3", 00:13:59.003 "uuid": "19437010-1a88-4307-973e-8a87e61c283a", 00:13:59.003 "is_configured": true, 00:13:59.003 "data_offset": 2048, 00:13:59.003 "data_size": 63488 00:13:59.003 }, 00:13:59.003 { 00:13:59.003 "name": "BaseBdev4", 00:13:59.003 "uuid": "92e8ad97-3e6f-44c6-930e-42dabcef8413", 00:13:59.003 "is_configured": true, 00:13:59.003 "data_offset": 2048, 00:13:59.003 "data_size": 63488 00:13:59.003 } 00:13:59.003 ] 00:13:59.003 } 00:13:59.003 } 00:13:59.003 }' 00:13:59.003 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.003 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:59.003 BaseBdev2 00:13:59.003 BaseBdev3 00:13:59.003 BaseBdev4' 00:13:59.003 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.003 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.003 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.004 22:30:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.004 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.262 [2024-09-27 22:30:54.903589] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.262 [2024-09-27 22:30:54.903628] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.262 [2024-09-27 22:30:54.903713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.262 [2024-09-27 22:30:54.903789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.262 [2024-09-27 22:30:54.903803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72688 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72688 ']' 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72688 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72688 00:13:59.262 killing process with pid 72688 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72688' 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72688 00:13:59.262 [2024-09-27 22:30:54.955937] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.262 22:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72688 00:13:59.827 [2024-09-27 22:30:55.400635] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.731 22:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:01.731 00:14:01.731 real 0m13.380s 00:14:01.731 user 0m20.384s 00:14:01.731 sys 0m2.591s 00:14:01.731 22:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.731 22:30:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.731 ************************************ 00:14:01.731 END TEST raid_state_function_test_sb 00:14:01.731 ************************************ 00:14:01.989 22:30:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:01.989 22:30:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:01.989 22:30:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.989 22:30:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.989 ************************************ 00:14:01.989 START TEST raid_superblock_test 00:14:01.989 ************************************ 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73375 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73375 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73375 ']' 00:14:01.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.989 22:30:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.989 [2024-09-27 22:30:57.755698] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:01.989 [2024-09-27 22:30:57.755841] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73375 ] 00:14:02.247 [2024-09-27 22:30:57.932924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.505 [2024-09-27 22:30:58.188427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.763 [2024-09-27 22:30:58.449577] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.763 [2024-09-27 22:30:58.449620] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:03.331 
22:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.331 22:30:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 malloc1 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 [2024-09-27 22:30:59.027087] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:03.331 [2024-09-27 22:30:59.027191] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.331 [2024-09-27 22:30:59.027222] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:03.331 [2024-09-27 22:30:59.027239] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.331 [2024-09-27 22:30:59.030026] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.331 [2024-09-27 22:30:59.030072] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:03.331 pt1 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 malloc2 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 [2024-09-27 22:30:59.096843] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:03.331 [2024-09-27 22:30:59.097139] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.331 [2024-09-27 22:30:59.097215] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:03.331 [2024-09-27 22:30:59.097295] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.331 [2024-09-27 22:30:59.100068] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.331 [2024-09-27 22:30:59.100224] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:03.331 
pt2 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 malloc3 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.331 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 [2024-09-27 22:30:59.166657] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:03.331 [2024-09-27 22:30:59.166735] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.331 [2024-09-27 22:30:59.166764] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:03.332 [2024-09-27 22:30:59.166777] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.332 [2024-09-27 22:30:59.169465] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.332 [2024-09-27 22:30:59.169515] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:03.332 pt3 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.332 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.590 malloc4 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.590 [2024-09-27 22:30:59.235225] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:03.590 [2024-09-27 22:30:59.235446] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.590 [2024-09-27 22:30:59.235512] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:03.590 [2024-09-27 22:30:59.235595] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.590 [2024-09-27 22:30:59.238392] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.590 [2024-09-27 22:30:59.238563] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:03.590 pt4 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.590 [2024-09-27 22:30:59.247387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:03.590 [2024-09-27 
22:30:59.249727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:03.590 [2024-09-27 22:30:59.249808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:03.590 [2024-09-27 22:30:59.249880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:03.590 [2024-09-27 22:30:59.250125] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:03.590 [2024-09-27 22:30:59.250146] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:03.590 [2024-09-27 22:30:59.250466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:03.590 [2024-09-27 22:30:59.250661] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:03.590 [2024-09-27 22:30:59.250677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:03.590 [2024-09-27 22:30:59.250867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.590 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.590 "name": "raid_bdev1", 00:14:03.590 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:03.591 "strip_size_kb": 64, 00:14:03.591 "state": "online", 00:14:03.591 "raid_level": "concat", 00:14:03.591 "superblock": true, 00:14:03.591 "num_base_bdevs": 4, 00:14:03.591 "num_base_bdevs_discovered": 4, 00:14:03.591 "num_base_bdevs_operational": 4, 00:14:03.591 "base_bdevs_list": [ 00:14:03.591 { 00:14:03.591 "name": "pt1", 00:14:03.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:03.591 "is_configured": true, 00:14:03.591 "data_offset": 2048, 00:14:03.591 "data_size": 63488 00:14:03.591 }, 00:14:03.591 { 00:14:03.591 "name": "pt2", 00:14:03.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.591 "is_configured": true, 00:14:03.591 "data_offset": 2048, 00:14:03.591 "data_size": 63488 00:14:03.591 }, 00:14:03.591 { 00:14:03.591 "name": "pt3", 00:14:03.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.591 "is_configured": true, 00:14:03.591 "data_offset": 2048, 00:14:03.591 
"data_size": 63488 00:14:03.591 }, 00:14:03.591 { 00:14:03.591 "name": "pt4", 00:14:03.591 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.591 "is_configured": true, 00:14:03.591 "data_offset": 2048, 00:14:03.591 "data_size": 63488 00:14:03.591 } 00:14:03.591 ] 00:14:03.591 }' 00:14:03.591 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.591 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.848 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.107 [2024-09-27 22:30:59.730964] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.107 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.107 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:04.107 "name": "raid_bdev1", 00:14:04.107 "aliases": [ 00:14:04.107 "d77e9069-14ac-49c2-8e98-6387d7e2f4ca" 
00:14:04.107 ], 00:14:04.107 "product_name": "Raid Volume", 00:14:04.107 "block_size": 512, 00:14:04.107 "num_blocks": 253952, 00:14:04.107 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:04.107 "assigned_rate_limits": { 00:14:04.107 "rw_ios_per_sec": 0, 00:14:04.107 "rw_mbytes_per_sec": 0, 00:14:04.107 "r_mbytes_per_sec": 0, 00:14:04.107 "w_mbytes_per_sec": 0 00:14:04.107 }, 00:14:04.107 "claimed": false, 00:14:04.107 "zoned": false, 00:14:04.107 "supported_io_types": { 00:14:04.107 "read": true, 00:14:04.107 "write": true, 00:14:04.107 "unmap": true, 00:14:04.107 "flush": true, 00:14:04.107 "reset": true, 00:14:04.107 "nvme_admin": false, 00:14:04.107 "nvme_io": false, 00:14:04.107 "nvme_io_md": false, 00:14:04.107 "write_zeroes": true, 00:14:04.107 "zcopy": false, 00:14:04.107 "get_zone_info": false, 00:14:04.107 "zone_management": false, 00:14:04.107 "zone_append": false, 00:14:04.107 "compare": false, 00:14:04.107 "compare_and_write": false, 00:14:04.107 "abort": false, 00:14:04.107 "seek_hole": false, 00:14:04.107 "seek_data": false, 00:14:04.107 "copy": false, 00:14:04.107 "nvme_iov_md": false 00:14:04.107 }, 00:14:04.107 "memory_domains": [ 00:14:04.107 { 00:14:04.108 "dma_device_id": "system", 00:14:04.108 "dma_device_type": 1 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.108 "dma_device_type": 2 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": "system", 00:14:04.108 "dma_device_type": 1 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.108 "dma_device_type": 2 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": "system", 00:14:04.108 "dma_device_type": 1 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.108 "dma_device_type": 2 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": "system", 00:14:04.108 "dma_device_type": 1 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:04.108 "dma_device_type": 2 00:14:04.108 } 00:14:04.108 ], 00:14:04.108 "driver_specific": { 00:14:04.108 "raid": { 00:14:04.108 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:04.108 "strip_size_kb": 64, 00:14:04.108 "state": "online", 00:14:04.108 "raid_level": "concat", 00:14:04.108 "superblock": true, 00:14:04.108 "num_base_bdevs": 4, 00:14:04.108 "num_base_bdevs_discovered": 4, 00:14:04.108 "num_base_bdevs_operational": 4, 00:14:04.108 "base_bdevs_list": [ 00:14:04.108 { 00:14:04.108 "name": "pt1", 00:14:04.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "pt2", 00:14:04.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "pt3", 00:14:04.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "pt4", 00:14:04.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 2048, 00:14:04.108 "data_size": 63488 00:14:04.108 } 00:14:04.108 ] 00:14:04.108 } 00:14:04.108 } 00:14:04.108 }' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:04.108 pt2 00:14:04.108 pt3 00:14:04.108 pt4' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.108 22:30:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.108 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:30:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:04.368 [2024-09-27 22:31:00.050492] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d77e9069-14ac-49c2-8e98-6387d7e2f4ca 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d77e9069-14ac-49c2-8e98-6387d7e2f4ca ']' 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 [2024-09-27 22:31:00.094130] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.368 [2024-09-27 22:31:00.094172] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.368 [2024-09-27 22:31:00.094263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.368 [2024-09-27 22:31:00.094339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.368 [2024-09-27 22:31:00.094361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.368 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.627 [2024-09-27 22:31:00.261898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:04.627 [2024-09-27 22:31:00.264395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:04.627 [2024-09-27 22:31:00.264456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:04.627 [2024-09-27 22:31:00.264494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:04.627 [2024-09-27 22:31:00.264551] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:04.627 [2024-09-27 22:31:00.264612] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:04.627 [2024-09-27 22:31:00.264636] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:04.627 [2024-09-27 22:31:00.264660] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:04.627 [2024-09-27 22:31:00.264677] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.627 [2024-09-27 22:31:00.264692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:14:04.627 request: 00:14:04.627 { 00:14:04.627 "name": "raid_bdev1", 00:14:04.627 "raid_level": "concat", 00:14:04.627 "base_bdevs": [ 00:14:04.627 "malloc1", 00:14:04.627 "malloc2", 00:14:04.627 "malloc3", 00:14:04.627 "malloc4" 00:14:04.627 ], 00:14:04.627 "strip_size_kb": 64, 00:14:04.627 "superblock": false, 00:14:04.627 "method": "bdev_raid_create", 00:14:04.627 "req_id": 1 00:14:04.627 } 00:14:04.627 Got JSON-RPC error response 00:14:04.627 response: 00:14:04.627 { 00:14:04.627 "code": -17, 00:14:04.627 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:04.627 } 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.627 [2024-09-27 22:31:00.329786] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.627 [2024-09-27 22:31:00.329870] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.627 [2024-09-27 22:31:00.329895] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:04.627 [2024-09-27 22:31:00.329911] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.627 [2024-09-27 22:31:00.332630] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.627 [2024-09-27 22:31:00.332691] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.627 [2024-09-27 22:31:00.332789] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:04.627 [2024-09-27 22:31:00.332860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.627 pt1 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.627 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.628 "name": "raid_bdev1", 00:14:04.628 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:04.628 "strip_size_kb": 64, 00:14:04.628 "state": "configuring", 00:14:04.628 "raid_level": "concat", 00:14:04.628 "superblock": true, 00:14:04.628 "num_base_bdevs": 4, 00:14:04.628 "num_base_bdevs_discovered": 1, 00:14:04.628 "num_base_bdevs_operational": 4, 00:14:04.628 "base_bdevs_list": [ 00:14:04.628 { 00:14:04.628 "name": "pt1", 00:14:04.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.628 "is_configured": true, 00:14:04.628 "data_offset": 2048, 00:14:04.628 "data_size": 63488 00:14:04.628 }, 00:14:04.628 { 00:14:04.628 "name": null, 00:14:04.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.628 "is_configured": false, 00:14:04.628 "data_offset": 2048, 00:14:04.628 "data_size": 63488 00:14:04.628 }, 00:14:04.628 { 00:14:04.628 "name": null, 00:14:04.628 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.628 "is_configured": false, 00:14:04.628 "data_offset": 2048, 00:14:04.628 "data_size": 63488 00:14:04.628 }, 00:14:04.628 { 00:14:04.628 "name": null, 00:14:04.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.628 "is_configured": false, 00:14:04.628 "data_offset": 2048, 00:14:04.628 "data_size": 63488 00:14:04.628 } 00:14:04.628 ] 00:14:04.628 }' 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.628 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.885 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:04.885 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.885 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.885 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.885 [2024-09-27 22:31:00.761190] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.885 [2024-09-27 22:31:00.761274] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.885 [2024-09-27 22:31:00.761299] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:04.885 [2024-09-27 22:31:00.761314] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.885 [2024-09-27 22:31:00.761825] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.885 [2024-09-27 22:31:00.761848] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.885 [2024-09-27 22:31:00.761937] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:04.885 [2024-09-27 22:31:00.761963] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.143 pt2 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.143 [2024-09-27 22:31:00.773237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.143 22:31:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.143 "name": "raid_bdev1", 00:14:05.143 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:05.143 "strip_size_kb": 64, 00:14:05.143 "state": "configuring", 00:14:05.143 "raid_level": "concat", 00:14:05.143 "superblock": true, 00:14:05.143 "num_base_bdevs": 4, 00:14:05.143 "num_base_bdevs_discovered": 1, 00:14:05.143 "num_base_bdevs_operational": 4, 00:14:05.143 "base_bdevs_list": [ 00:14:05.143 { 00:14:05.143 "name": "pt1", 00:14:05.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.143 "is_configured": true, 00:14:05.143 "data_offset": 2048, 00:14:05.143 "data_size": 63488 00:14:05.143 }, 00:14:05.143 { 00:14:05.143 "name": null, 00:14:05.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.143 "is_configured": false, 00:14:05.143 "data_offset": 0, 00:14:05.143 "data_size": 63488 00:14:05.143 }, 00:14:05.143 { 00:14:05.143 "name": null, 00:14:05.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.143 "is_configured": false, 00:14:05.143 "data_offset": 2048, 00:14:05.143 "data_size": 63488 00:14:05.143 }, 00:14:05.143 { 00:14:05.143 "name": null, 00:14:05.143 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.143 "is_configured": false, 00:14:05.143 "data_offset": 2048, 00:14:05.143 "data_size": 63488 00:14:05.143 } 00:14:05.143 ] 00:14:05.143 }' 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.143 22:31:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.402 [2024-09-27 22:31:01.228560] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.402 [2024-09-27 22:31:01.228641] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.402 [2024-09-27 22:31:01.228666] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:05.402 [2024-09-27 22:31:01.228679] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.402 [2024-09-27 22:31:01.229202] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.402 [2024-09-27 22:31:01.229226] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.402 [2024-09-27 22:31:01.229319] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:05.402 [2024-09-27 22:31:01.229347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.402 pt2 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.402 [2024-09-27 22:31:01.240554] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:05.402 [2024-09-27 22:31:01.240810] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.402 [2024-09-27 22:31:01.240844] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:05.402 [2024-09-27 22:31:01.240857] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.402 [2024-09-27 22:31:01.241360] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.402 [2024-09-27 22:31:01.241381] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:05.402 [2024-09-27 22:31:01.241471] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:05.402 [2024-09-27 22:31:01.241501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.402 pt3 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.402 [2024-09-27 22:31:01.252502] 
vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:05.402 [2024-09-27 22:31:01.252585] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.402 [2024-09-27 22:31:01.252619] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:05.402 [2024-09-27 22:31:01.252632] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.402 [2024-09-27 22:31:01.253165] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.402 [2024-09-27 22:31:01.253184] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:05.402 [2024-09-27 22:31:01.253286] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:05.402 [2024-09-27 22:31:01.253316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:05.402 [2024-09-27 22:31:01.253484] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.402 [2024-09-27 22:31:01.253494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:05.402 [2024-09-27 22:31:01.253757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:05.402 [2024-09-27 22:31:01.253919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.402 [2024-09-27 22:31:01.253940] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:05.402 [2024-09-27 22:31:01.254098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.402 pt4 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.402 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.660 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.660 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.660 "name": "raid_bdev1", 00:14:05.660 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:05.660 "strip_size_kb": 64, 00:14:05.660 "state": "online", 00:14:05.660 "raid_level": "concat", 00:14:05.660 
"superblock": true, 00:14:05.660 "num_base_bdevs": 4, 00:14:05.660 "num_base_bdevs_discovered": 4, 00:14:05.660 "num_base_bdevs_operational": 4, 00:14:05.660 "base_bdevs_list": [ 00:14:05.660 { 00:14:05.660 "name": "pt1", 00:14:05.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.660 "is_configured": true, 00:14:05.660 "data_offset": 2048, 00:14:05.660 "data_size": 63488 00:14:05.660 }, 00:14:05.660 { 00:14:05.660 "name": "pt2", 00:14:05.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.660 "is_configured": true, 00:14:05.660 "data_offset": 2048, 00:14:05.660 "data_size": 63488 00:14:05.660 }, 00:14:05.660 { 00:14:05.660 "name": "pt3", 00:14:05.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.660 "is_configured": true, 00:14:05.660 "data_offset": 2048, 00:14:05.660 "data_size": 63488 00:14:05.660 }, 00:14:05.660 { 00:14:05.660 "name": "pt4", 00:14:05.660 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.660 "is_configured": true, 00:14:05.660 "data_offset": 2048, 00:14:05.660 "data_size": 63488 00:14:05.660 } 00:14:05.660 ] 00:14:05.660 }' 00:14:05.660 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.660 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.918 22:31:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.918 [2024-09-27 22:31:01.736280] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.918 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.918 "name": "raid_bdev1", 00:14:05.918 "aliases": [ 00:14:05.918 "d77e9069-14ac-49c2-8e98-6387d7e2f4ca" 00:14:05.918 ], 00:14:05.918 "product_name": "Raid Volume", 00:14:05.918 "block_size": 512, 00:14:05.918 "num_blocks": 253952, 00:14:05.918 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:05.918 "assigned_rate_limits": { 00:14:05.918 "rw_ios_per_sec": 0, 00:14:05.918 "rw_mbytes_per_sec": 0, 00:14:05.918 "r_mbytes_per_sec": 0, 00:14:05.918 "w_mbytes_per_sec": 0 00:14:05.918 }, 00:14:05.918 "claimed": false, 00:14:05.918 "zoned": false, 00:14:05.918 "supported_io_types": { 00:14:05.918 "read": true, 00:14:05.918 "write": true, 00:14:05.918 "unmap": true, 00:14:05.918 "flush": true, 00:14:05.918 "reset": true, 00:14:05.918 "nvme_admin": false, 00:14:05.918 "nvme_io": false, 00:14:05.918 "nvme_io_md": false, 00:14:05.918 "write_zeroes": true, 00:14:05.918 "zcopy": false, 00:14:05.918 "get_zone_info": false, 00:14:05.918 "zone_management": false, 00:14:05.918 "zone_append": false, 00:14:05.918 "compare": false, 00:14:05.918 "compare_and_write": false, 00:14:05.918 "abort": false, 00:14:05.918 "seek_hole": false, 00:14:05.918 "seek_data": false, 00:14:05.918 "copy": false, 00:14:05.918 "nvme_iov_md": false 00:14:05.918 }, 00:14:05.919 
"memory_domains": [ 00:14:05.919 { 00:14:05.919 "dma_device_id": "system", 00:14:05.919 "dma_device_type": 1 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.919 "dma_device_type": 2 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "system", 00:14:05.919 "dma_device_type": 1 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.919 "dma_device_type": 2 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "system", 00:14:05.919 "dma_device_type": 1 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.919 "dma_device_type": 2 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "system", 00:14:05.919 "dma_device_type": 1 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.919 "dma_device_type": 2 00:14:05.919 } 00:14:05.919 ], 00:14:05.919 "driver_specific": { 00:14:05.919 "raid": { 00:14:05.919 "uuid": "d77e9069-14ac-49c2-8e98-6387d7e2f4ca", 00:14:05.919 "strip_size_kb": 64, 00:14:05.919 "state": "online", 00:14:05.919 "raid_level": "concat", 00:14:05.919 "superblock": true, 00:14:05.919 "num_base_bdevs": 4, 00:14:05.919 "num_base_bdevs_discovered": 4, 00:14:05.919 "num_base_bdevs_operational": 4, 00:14:05.919 "base_bdevs_list": [ 00:14:05.919 { 00:14:05.919 "name": "pt1", 00:14:05.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.919 "is_configured": true, 00:14:05.919 "data_offset": 2048, 00:14:05.919 "data_size": 63488 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "name": "pt2", 00:14:05.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.919 "is_configured": true, 00:14:05.919 "data_offset": 2048, 00:14:05.919 "data_size": 63488 00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "name": "pt3", 00:14:05.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.919 "is_configured": true, 00:14:05.919 "data_offset": 2048, 00:14:05.919 "data_size": 63488 
00:14:05.919 }, 00:14:05.919 { 00:14:05.919 "name": "pt4", 00:14:05.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.919 "is_configured": true, 00:14:05.919 "data_offset": 2048, 00:14:05.919 "data_size": 63488 00:14:05.919 } 00:14:05.919 ] 00:14:05.919 } 00:14:05.919 } 00:14:05.919 }' 00:14:05.919 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:06.177 pt2 00:14:06.177 pt3 00:14:06.177 pt4' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.177 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:06.178 22:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.178 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.178 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.178 22:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.178 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:06.437 [2024-09-27 22:31:02.071898] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d77e9069-14ac-49c2-8e98-6387d7e2f4ca '!=' d77e9069-14ac-49c2-8e98-6387d7e2f4ca ']' 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73375 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73375 ']' 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73375 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73375 00:14:06.437 killing process with pid 73375 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73375' 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73375 00:14:06.437 [2024-09-27 22:31:02.148642] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.437 22:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73375 00:14:06.437 [2024-09-27 22:31:02.148745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.437 [2024-09-27 22:31:02.148834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.437 [2024-09-27 22:31:02.148846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:07.004 [2024-09-27 22:31:02.602503] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.921 22:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:08.921 00:14:08.921 real 0m7.105s 00:14:08.921 user 0m9.501s 00:14:08.921 sys 0m1.200s 00:14:08.921 22:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.921 22:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.921 ************************************ 00:14:08.921 END TEST raid_superblock_test 
00:14:08.921 ************************************ 00:14:09.203 22:31:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:09.203 22:31:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:09.203 22:31:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.203 22:31:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.203 ************************************ 00:14:09.203 START TEST raid_read_error_test 00:14:09.203 ************************************ 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gdevlc04Qw 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73651 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73651 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73651 ']' 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.203 22:31:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.203 [2024-09-27 22:31:04.970194] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:09.203 [2024-09-27 22:31:04.970576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73651 ] 00:14:09.462 [2024-09-27 22:31:05.146901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.720 [2024-09-27 22:31:05.400495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.978 [2024-09-27 22:31:05.658080] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.978 [2024-09-27 22:31:05.658119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 BaseBdev1_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 true 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 [2024-09-27 22:31:06.230259] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:10.545 [2024-09-27 22:31:06.230333] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.545 [2024-09-27 22:31:06.230357] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:10.545 [2024-09-27 22:31:06.230372] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.545 [2024-09-27 22:31:06.233160] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.545 [2024-09-27 22:31:06.233224] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.545 BaseBdev1 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 BaseBdev2_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 true 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 [2024-09-27 22:31:06.306939] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:10.545 [2024-09-27 22:31:06.307034] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.545 [2024-09-27 22:31:06.307058] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:10.545 [2024-09-27 22:31:06.307074] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.545 [2024-09-27 22:31:06.309739] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.545 [2024-09-27 22:31:06.309947] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.545 BaseBdev2 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 BaseBdev3_malloc 00:14:10.545 22:31:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 true 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.545 [2024-09-27 22:31:06.383063] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:10.545 [2024-09-27 22:31:06.383143] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.545 [2024-09-27 22:31:06.383169] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:10.545 [2024-09-27 22:31:06.383184] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.545 [2024-09-27 22:31:06.385912] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.545 [2024-09-27 22:31:06.386137] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.545 BaseBdev3 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.545 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 BaseBdev4_malloc 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 true 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 [2024-09-27 22:31:06.459764] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:10.804 [2024-09-27 22:31:06.459842] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.804 [2024-09-27 22:31:06.459869] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:10.804 [2024-09-27 22:31:06.459884] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.804 [2024-09-27 22:31:06.462546] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.804 [2024-09-27 22:31:06.462599] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:10.804 BaseBdev4 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 [2024-09-27 22:31:06.471829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.804 [2024-09-27 22:31:06.474184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.804 [2024-09-27 22:31:06.474273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.804 [2024-09-27 22:31:06.474341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.804 [2024-09-27 22:31:06.474625] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:10.804 [2024-09-27 22:31:06.474642] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:10.804 [2024-09-27 22:31:06.474945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:10.804 [2024-09-27 22:31:06.475154] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:10.804 [2024-09-27 22:31:06.475166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:10.804 [2024-09-27 22:31:06.475359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:10.804 22:31:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.804 "name": "raid_bdev1", 00:14:10.804 "uuid": "d6c3c9d9-1ed6-46d2-bdaa-38d220dbe6e6", 00:14:10.804 "strip_size_kb": 64, 00:14:10.804 "state": "online", 00:14:10.804 "raid_level": "concat", 00:14:10.804 "superblock": true, 00:14:10.804 "num_base_bdevs": 4, 00:14:10.804 "num_base_bdevs_discovered": 4, 00:14:10.804 "num_base_bdevs_operational": 4, 00:14:10.804 "base_bdevs_list": [ 
00:14:10.804 { 00:14:10.804 "name": "BaseBdev1", 00:14:10.804 "uuid": "e27aab18-21f3-5ba3-833b-5f2a1fc55af3", 00:14:10.804 "is_configured": true, 00:14:10.804 "data_offset": 2048, 00:14:10.804 "data_size": 63488 00:14:10.804 }, 00:14:10.804 { 00:14:10.804 "name": "BaseBdev2", 00:14:10.804 "uuid": "ae2a489a-cb67-5f7d-b02d-7ebc5803020e", 00:14:10.804 "is_configured": true, 00:14:10.804 "data_offset": 2048, 00:14:10.804 "data_size": 63488 00:14:10.804 }, 00:14:10.804 { 00:14:10.804 "name": "BaseBdev3", 00:14:10.804 "uuid": "a41e15de-808e-5330-9813-af37503683c9", 00:14:10.804 "is_configured": true, 00:14:10.804 "data_offset": 2048, 00:14:10.804 "data_size": 63488 00:14:10.804 }, 00:14:10.804 { 00:14:10.804 "name": "BaseBdev4", 00:14:10.804 "uuid": "e1b03cc5-e9f0-5741-8aa2-c47d8eab6984", 00:14:10.804 "is_configured": true, 00:14:10.804 "data_offset": 2048, 00:14:10.804 "data_size": 63488 00:14:10.804 } 00:14:10.804 ] 00:14:10.804 }' 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.804 22:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.063 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:11.063 22:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:11.321 [2024-09-27 22:31:07.013282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.258 22:31:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.258 22:31:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.258 "name": "raid_bdev1", 00:14:12.258 "uuid": "d6c3c9d9-1ed6-46d2-bdaa-38d220dbe6e6", 00:14:12.258 "strip_size_kb": 64, 00:14:12.258 "state": "online", 00:14:12.258 "raid_level": "concat", 00:14:12.258 "superblock": true, 00:14:12.258 "num_base_bdevs": 4, 00:14:12.258 "num_base_bdevs_discovered": 4, 00:14:12.258 "num_base_bdevs_operational": 4, 00:14:12.258 "base_bdevs_list": [ 00:14:12.258 { 00:14:12.258 "name": "BaseBdev1", 00:14:12.258 "uuid": "e27aab18-21f3-5ba3-833b-5f2a1fc55af3", 00:14:12.258 "is_configured": true, 00:14:12.258 "data_offset": 2048, 00:14:12.258 "data_size": 63488 00:14:12.258 }, 00:14:12.258 { 00:14:12.258 "name": "BaseBdev2", 00:14:12.258 "uuid": "ae2a489a-cb67-5f7d-b02d-7ebc5803020e", 00:14:12.258 "is_configured": true, 00:14:12.258 "data_offset": 2048, 00:14:12.258 "data_size": 63488 00:14:12.258 }, 00:14:12.258 { 00:14:12.258 "name": "BaseBdev3", 00:14:12.258 "uuid": "a41e15de-808e-5330-9813-af37503683c9", 00:14:12.258 "is_configured": true, 00:14:12.258 "data_offset": 2048, 00:14:12.258 "data_size": 63488 00:14:12.258 }, 00:14:12.258 { 00:14:12.258 "name": "BaseBdev4", 00:14:12.258 "uuid": "e1b03cc5-e9f0-5741-8aa2-c47d8eab6984", 00:14:12.258 "is_configured": true, 00:14:12.258 "data_offset": 2048, 00:14:12.258 "data_size": 63488 00:14:12.258 } 00:14:12.258 ] 00:14:12.258 }' 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.258 22:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.516 22:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.516 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.516 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.516 [2024-09-27 22:31:08.362277] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.516 [2024-09-27 22:31:08.362323] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.516 { 00:14:12.516 "results": [ 00:14:12.516 { 00:14:12.516 "job": "raid_bdev1", 00:14:12.517 "core_mask": "0x1", 00:14:12.517 "workload": "randrw", 00:14:12.517 "percentage": 50, 00:14:12.517 "status": "finished", 00:14:12.517 "queue_depth": 1, 00:14:12.517 "io_size": 131072, 00:14:12.517 "runtime": 1.348816, 00:14:12.517 "iops": 14443.03744914058, 00:14:12.517 "mibps": 1805.3796811425725, 00:14:12.517 "io_failed": 1, 00:14:12.517 "io_timeout": 0, 00:14:12.517 "avg_latency_us": 95.58405035808978, 00:14:12.517 "min_latency_us": 28.170281124497993, 00:14:12.517 "max_latency_us": 1559.4409638554216 00:14:12.517 } 00:14:12.517 ], 00:14:12.517 "core_count": 1 00:14:12.517 } 00:14:12.517 [2024-09-27 22:31:08.365171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.517 [2024-09-27 22:31:08.365259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.517 [2024-09-27 22:31:08.365305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.517 [2024-09-27 22:31:08.365320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73651 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73651 ']' 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73651 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.517 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73651 00:14:12.777 killing process with pid 73651 00:14:12.777 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.777 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.777 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73651' 00:14:12.777 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73651 00:14:12.777 [2024-09-27 22:31:08.419983] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.777 22:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73651 00:14:13.035 [2024-09-27 22:31:08.774890] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.568 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:15.568 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:15.568 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gdevlc04Qw 00:14:15.568 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:15.568 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:15.569 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:15.569 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:15.569 22:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:15.569 00:14:15.569 real 0m6.163s 00:14:15.569 user 0m6.985s 00:14:15.569 sys 0m0.751s 00:14:15.569 ************************************ 00:14:15.569 END TEST raid_read_error_test 
00:14:15.569 ************************************ 00:14:15.569 22:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.569 22:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.569 22:31:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:15.569 22:31:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:15.569 22:31:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.569 22:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.569 ************************************ 00:14:15.569 START TEST raid_write_error_test 00:14:15.569 ************************************ 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nidLhl7sKr 00:14:15.569 22:31:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73813 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73813 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73813 ']' 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.569 22:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.569 [2024-09-27 22:31:11.195039] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:15.569 [2024-09-27 22:31:11.195180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73813 ] 00:14:15.569 [2024-09-27 22:31:11.369118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.827 [2024-09-27 22:31:11.620874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.086 [2024-09-27 22:31:11.879729] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.086 [2024-09-27 22:31:11.879772] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.655 BaseBdev1_malloc 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.655 true 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.655 [2024-09-27 22:31:12.457676] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:16.655 [2024-09-27 22:31:12.457998] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.655 [2024-09-27 22:31:12.458037] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:16.655 [2024-09-27 22:31:12.458054] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.655 [2024-09-27 22:31:12.460835] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.655 [2024-09-27 22:31:12.460896] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.655 BaseBdev1 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.655 BaseBdev2_malloc 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:16.655 22:31:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.655 true 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.655 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 [2024-09-27 22:31:12.533975] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:16.915 [2024-09-27 22:31:12.534068] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.915 [2024-09-27 22:31:12.534097] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:16.915 [2024-09-27 22:31:12.534113] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.915 [2024-09-27 22:31:12.536805] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.915 [2024-09-27 22:31:12.537048] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:16.915 BaseBdev2 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:16.915 BaseBdev3_malloc 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 true 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 [2024-09-27 22:31:12.609896] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:16.915 [2024-09-27 22:31:12.609989] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.915 [2024-09-27 22:31:12.610017] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:16.915 [2024-09-27 22:31:12.610033] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.915 [2024-09-27 22:31:12.612739] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.915 [2024-09-27 22:31:12.612950] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:16.915 BaseBdev3 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 BaseBdev4_malloc 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 true 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 [2024-09-27 22:31:12.687539] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:16.915 [2024-09-27 22:31:12.687810] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.915 [2024-09-27 22:31:12.687848] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:16.915 [2024-09-27 22:31:12.687864] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.915 [2024-09-27 22:31:12.690616] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.915 [2024-09-27 22:31:12.690680] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:16.915 BaseBdev4 
00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 [2024-09-27 22:31:12.699722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.915 [2024-09-27 22:31:12.702139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.915 [2024-09-27 22:31:12.702405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.915 [2024-09-27 22:31:12.702489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.915 [2024-09-27 22:31:12.702758] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:16.915 [2024-09-27 22:31:12.702775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:16.915 [2024-09-27 22:31:12.703109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.915 [2024-09-27 22:31:12.703306] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:16.915 [2024-09-27 22:31:12.703317] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:16.915 [2024-09-27 22:31:12.703580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.915 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.915 "name": "raid_bdev1", 00:14:16.915 "uuid": "3bf196ab-f900-4c55-a084-b887174bd6a0", 00:14:16.915 "strip_size_kb": 64, 00:14:16.915 "state": "online", 00:14:16.915 "raid_level": "concat", 00:14:16.915 "superblock": true, 00:14:16.915 "num_base_bdevs": 4, 00:14:16.915 "num_base_bdevs_discovered": 4, 00:14:16.915 
"num_base_bdevs_operational": 4, 00:14:16.915 "base_bdevs_list": [ 00:14:16.915 { 00:14:16.915 "name": "BaseBdev1", 00:14:16.915 "uuid": "915a8bbf-4200-5531-b32f-3657830bb388", 00:14:16.915 "is_configured": true, 00:14:16.915 "data_offset": 2048, 00:14:16.915 "data_size": 63488 00:14:16.915 }, 00:14:16.915 { 00:14:16.915 "name": "BaseBdev2", 00:14:16.916 "uuid": "399efeb9-3ca0-5b8b-bc76-113797c11baa", 00:14:16.916 "is_configured": true, 00:14:16.916 "data_offset": 2048, 00:14:16.916 "data_size": 63488 00:14:16.916 }, 00:14:16.916 { 00:14:16.916 "name": "BaseBdev3", 00:14:16.916 "uuid": "fe57c03c-98db-55cb-9022-17f7c846b13c", 00:14:16.916 "is_configured": true, 00:14:16.916 "data_offset": 2048, 00:14:16.916 "data_size": 63488 00:14:16.916 }, 00:14:16.916 { 00:14:16.916 "name": "BaseBdev4", 00:14:16.916 "uuid": "158e4b4b-4269-57ed-ae85-9eb49fce3822", 00:14:16.916 "is_configured": true, 00:14:16.916 "data_offset": 2048, 00:14:16.916 "data_size": 63488 00:14:16.916 } 00:14:16.916 ] 00:14:16.916 }' 00:14:16.916 22:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.916 22:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.484 22:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:17.484 22:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:17.484 [2024-09-27 22:31:13.257360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.419 22:31:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.419 "name": "raid_bdev1", 00:14:18.419 "uuid": "3bf196ab-f900-4c55-a084-b887174bd6a0", 00:14:18.419 "strip_size_kb": 64, 00:14:18.419 "state": "online", 00:14:18.419 "raid_level": "concat", 00:14:18.419 "superblock": true, 00:14:18.419 "num_base_bdevs": 4, 00:14:18.419 "num_base_bdevs_discovered": 4, 00:14:18.419 "num_base_bdevs_operational": 4, 00:14:18.419 "base_bdevs_list": [ 00:14:18.419 { 00:14:18.419 "name": "BaseBdev1", 00:14:18.419 "uuid": "915a8bbf-4200-5531-b32f-3657830bb388", 00:14:18.419 "is_configured": true, 00:14:18.419 "data_offset": 2048, 00:14:18.419 "data_size": 63488 00:14:18.419 }, 00:14:18.419 { 00:14:18.419 "name": "BaseBdev2", 00:14:18.419 "uuid": "399efeb9-3ca0-5b8b-bc76-113797c11baa", 00:14:18.419 "is_configured": true, 00:14:18.419 "data_offset": 2048, 00:14:18.419 "data_size": 63488 00:14:18.419 }, 00:14:18.419 { 00:14:18.419 "name": "BaseBdev3", 00:14:18.419 "uuid": "fe57c03c-98db-55cb-9022-17f7c846b13c", 00:14:18.419 "is_configured": true, 00:14:18.419 "data_offset": 2048, 00:14:18.419 "data_size": 63488 00:14:18.419 }, 00:14:18.419 { 00:14:18.419 "name": "BaseBdev4", 00:14:18.419 "uuid": "158e4b4b-4269-57ed-ae85-9eb49fce3822", 00:14:18.419 "is_configured": true, 00:14:18.419 "data_offset": 2048, 00:14:18.419 "data_size": 63488 00:14:18.419 } 00:14:18.419 ] 00:14:18.419 }' 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.419 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.677 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:18.677 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.677 22:31:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.677 [2024-09-27 22:31:14.550468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.677 [2024-09-27 22:31:14.550509] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.677 [2024-09-27 22:31:14.553349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.677 [2024-09-27 22:31:14.553420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.677 [2024-09-27 22:31:14.553467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.677 [2024-09-27 22:31:14.553482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:18.935 { 00:14:18.935 "results": [ 00:14:18.935 { 00:14:18.935 "job": "raid_bdev1", 00:14:18.935 "core_mask": "0x1", 00:14:18.935 "workload": "randrw", 00:14:18.935 "percentage": 50, 00:14:18.935 "status": "finished", 00:14:18.935 "queue_depth": 1, 00:14:18.935 "io_size": 131072, 00:14:18.935 "runtime": 1.292465, 00:14:18.935 "iops": 14320.69727226656, 00:14:18.935 "mibps": 1790.08715903332, 00:14:18.935 "io_failed": 1, 00:14:18.935 "io_timeout": 0, 00:14:18.935 "avg_latency_us": 96.36854582023393, 00:14:18.935 "min_latency_us": 30.226506024096384, 00:14:18.935 "max_latency_us": 1559.4409638554216 00:14:18.935 } 00:14:18.935 ], 00:14:18.935 "core_count": 1 00:14:18.935 } 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73813 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73813 ']' 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73813 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73813 00:14:18.935 killing process with pid 73813 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73813' 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73813 00:14:18.935 22:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73813 00:14:18.935 [2024-09-27 22:31:14.605942] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.194 [2024-09-27 22:31:14.963547] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nidLhl7sKr 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:14:21.729 ************************************ 00:14:21.729 END TEST 
raid_write_error_test 00:14:21.729 ************************************ 00:14:21.729 00:14:21.729 real 0m6.108s 00:14:21.729 user 0m6.882s 00:14:21.729 sys 0m0.717s 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.729 22:31:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.729 22:31:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:21.729 22:31:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:21.729 22:31:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:21.729 22:31:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.729 22:31:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.729 ************************************ 00:14:21.729 START TEST raid_state_function_test 00:14:21.729 ************************************ 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.729 22:31:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:21.729 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:21.730 22:31:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73968 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:21.730 Process raid pid: 73968 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73968' 00:14:21.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73968 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73968 ']' 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.730 22:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.730 [2024-09-27 22:31:17.372089] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:21.730 [2024-09-27 22:31:17.372436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.730 [2024-09-27 22:31:17.548908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.003 [2024-09-27 22:31:17.810428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.265 [2024-09-27 22:31:18.074263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.265 [2024-09-27 22:31:18.074312] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.915 [2024-09-27 22:31:18.592460] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.915 [2024-09-27 22:31:18.592531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.915 [2024-09-27 22:31:18.592544] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.915 [2024-09-27 22:31:18.592558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.915 [2024-09-27 22:31:18.592567] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:22.915 [2024-09-27 22:31:18.592582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.915 [2024-09-27 22:31:18.592590] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:22.915 [2024-09-27 22:31:18.592603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.915 "name": "Existed_Raid", 00:14:22.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.915 "strip_size_kb": 0, 00:14:22.915 "state": "configuring", 00:14:22.915 "raid_level": "raid1", 00:14:22.915 "superblock": false, 00:14:22.915 "num_base_bdevs": 4, 00:14:22.915 "num_base_bdevs_discovered": 0, 00:14:22.915 "num_base_bdevs_operational": 4, 00:14:22.915 "base_bdevs_list": [ 00:14:22.915 { 00:14:22.915 "name": "BaseBdev1", 00:14:22.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.915 "is_configured": false, 00:14:22.915 "data_offset": 0, 00:14:22.915 "data_size": 0 00:14:22.915 }, 00:14:22.915 { 00:14:22.915 "name": "BaseBdev2", 00:14:22.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.915 "is_configured": false, 00:14:22.915 "data_offset": 0, 00:14:22.915 "data_size": 0 00:14:22.915 }, 00:14:22.915 { 00:14:22.915 "name": "BaseBdev3", 00:14:22.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.915 "is_configured": false, 00:14:22.915 "data_offset": 0, 00:14:22.915 "data_size": 0 00:14:22.915 }, 00:14:22.915 { 00:14:22.915 "name": "BaseBdev4", 00:14:22.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.915 "is_configured": false, 00:14:22.915 "data_offset": 0, 00:14:22.915 "data_size": 0 00:14:22.915 } 00:14:22.915 ] 00:14:22.915 }' 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.915 22:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 [2024-09-27 22:31:19.063706] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.484 [2024-09-27 22:31:19.063765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 [2024-09-27 22:31:19.075711] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.484 [2024-09-27 22:31:19.075776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.484 [2024-09-27 22:31:19.075787] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.484 [2024-09-27 22:31:19.075801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.484 [2024-09-27 22:31:19.075809] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:23.484 [2024-09-27 22:31:19.075822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.484 [2024-09-27 22:31:19.075831] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:23.484 [2024-09-27 22:31:19.075844] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 [2024-09-27 22:31:19.134154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.484 BaseBdev1 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 [ 00:14:23.484 { 00:14:23.484 "name": "BaseBdev1", 00:14:23.484 "aliases": [ 00:14:23.484 "d9015a49-5ac2-4c8c-b64b-86fb24325efd" 00:14:23.484 ], 00:14:23.484 "product_name": "Malloc disk", 00:14:23.484 "block_size": 512, 00:14:23.484 "num_blocks": 65536, 00:14:23.484 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:23.484 "assigned_rate_limits": { 00:14:23.484 "rw_ios_per_sec": 0, 00:14:23.484 "rw_mbytes_per_sec": 0, 00:14:23.484 "r_mbytes_per_sec": 0, 00:14:23.484 "w_mbytes_per_sec": 0 00:14:23.484 }, 00:14:23.484 "claimed": true, 00:14:23.484 "claim_type": "exclusive_write", 00:14:23.484 "zoned": false, 00:14:23.484 "supported_io_types": { 00:14:23.484 "read": true, 00:14:23.484 "write": true, 00:14:23.484 "unmap": true, 00:14:23.484 "flush": true, 00:14:23.484 "reset": true, 00:14:23.484 "nvme_admin": false, 00:14:23.484 "nvme_io": false, 00:14:23.484 "nvme_io_md": false, 00:14:23.484 "write_zeroes": true, 00:14:23.484 "zcopy": true, 00:14:23.484 "get_zone_info": false, 00:14:23.484 "zone_management": false, 00:14:23.484 "zone_append": false, 00:14:23.484 "compare": false, 00:14:23.484 "compare_and_write": false, 00:14:23.484 "abort": true, 00:14:23.484 "seek_hole": false, 00:14:23.484 "seek_data": false, 00:14:23.484 "copy": true, 00:14:23.484 "nvme_iov_md": false 00:14:23.484 }, 00:14:23.484 "memory_domains": [ 00:14:23.484 { 00:14:23.484 "dma_device_id": "system", 00:14:23.484 "dma_device_type": 1 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.484 "dma_device_type": 2 00:14:23.484 } 00:14:23.484 ], 00:14:23.484 "driver_specific": {} 00:14:23.484 } 00:14:23.484 ] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.484 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.484 "name": "Existed_Raid", 
00:14:23.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.484 "strip_size_kb": 0, 00:14:23.484 "state": "configuring", 00:14:23.484 "raid_level": "raid1", 00:14:23.484 "superblock": false, 00:14:23.484 "num_base_bdevs": 4, 00:14:23.484 "num_base_bdevs_discovered": 1, 00:14:23.484 "num_base_bdevs_operational": 4, 00:14:23.484 "base_bdevs_list": [ 00:14:23.484 { 00:14:23.484 "name": "BaseBdev1", 00:14:23.484 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:23.484 "is_configured": true, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 65536 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev2", 00:14:23.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.484 "is_configured": false, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 0 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev3", 00:14:23.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.484 "is_configured": false, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 0 00:14:23.484 }, 00:14:23.484 { 00:14:23.484 "name": "BaseBdev4", 00:14:23.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.484 "is_configured": false, 00:14:23.484 "data_offset": 0, 00:14:23.484 "data_size": 0 00:14:23.484 } 00:14:23.484 ] 00:14:23.484 }' 00:14:23.485 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.485 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.053 [2024-09-27 22:31:19.629589] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.053 [2024-09-27 22:31:19.629814] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.053 [2024-09-27 22:31:19.641650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.053 [2024-09-27 22:31:19.644124] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.053 [2024-09-27 22:31:19.644306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.053 [2024-09-27 22:31:19.644399] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:24.053 [2024-09-27 22:31:19.644449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:24.053 [2024-09-27 22:31:19.644535] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:24.053 [2024-09-27 22:31:19.644579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:24.053 
22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.053 "name": "Existed_Raid", 00:14:24.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.053 "strip_size_kb": 0, 00:14:24.053 "state": "configuring", 00:14:24.053 "raid_level": "raid1", 00:14:24.053 "superblock": false, 00:14:24.053 "num_base_bdevs": 4, 00:14:24.053 "num_base_bdevs_discovered": 1, 
00:14:24.053 "num_base_bdevs_operational": 4, 00:14:24.053 "base_bdevs_list": [ 00:14:24.053 { 00:14:24.053 "name": "BaseBdev1", 00:14:24.053 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:24.053 "is_configured": true, 00:14:24.053 "data_offset": 0, 00:14:24.053 "data_size": 65536 00:14:24.053 }, 00:14:24.053 { 00:14:24.053 "name": "BaseBdev2", 00:14:24.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.053 "is_configured": false, 00:14:24.053 "data_offset": 0, 00:14:24.053 "data_size": 0 00:14:24.053 }, 00:14:24.053 { 00:14:24.053 "name": "BaseBdev3", 00:14:24.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.053 "is_configured": false, 00:14:24.053 "data_offset": 0, 00:14:24.053 "data_size": 0 00:14:24.053 }, 00:14:24.053 { 00:14:24.053 "name": "BaseBdev4", 00:14:24.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.053 "is_configured": false, 00:14:24.053 "data_offset": 0, 00:14:24.053 "data_size": 0 00:14:24.053 } 00:14:24.053 ] 00:14:24.053 }' 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.053 22:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 [2024-09-27 22:31:20.112391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.312 BaseBdev2 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.312 [ 00:14:24.312 { 00:14:24.312 "name": "BaseBdev2", 00:14:24.312 "aliases": [ 00:14:24.312 "d89d014c-5a51-4be1-a34d-ef1fc079def6" 00:14:24.312 ], 00:14:24.312 "product_name": "Malloc disk", 00:14:24.312 "block_size": 512, 00:14:24.312 "num_blocks": 65536, 00:14:24.312 "uuid": "d89d014c-5a51-4be1-a34d-ef1fc079def6", 00:14:24.312 "assigned_rate_limits": { 00:14:24.312 "rw_ios_per_sec": 0, 00:14:24.312 "rw_mbytes_per_sec": 0, 00:14:24.312 "r_mbytes_per_sec": 0, 00:14:24.312 "w_mbytes_per_sec": 0 00:14:24.312 }, 00:14:24.312 "claimed": true, 00:14:24.312 "claim_type": "exclusive_write", 00:14:24.312 "zoned": false, 00:14:24.312 "supported_io_types": { 00:14:24.312 "read": true, 
00:14:24.312 "write": true, 00:14:24.312 "unmap": true, 00:14:24.312 "flush": true, 00:14:24.312 "reset": true, 00:14:24.312 "nvme_admin": false, 00:14:24.312 "nvme_io": false, 00:14:24.312 "nvme_io_md": false, 00:14:24.312 "write_zeroes": true, 00:14:24.312 "zcopy": true, 00:14:24.312 "get_zone_info": false, 00:14:24.312 "zone_management": false, 00:14:24.312 "zone_append": false, 00:14:24.312 "compare": false, 00:14:24.312 "compare_and_write": false, 00:14:24.312 "abort": true, 00:14:24.312 "seek_hole": false, 00:14:24.312 "seek_data": false, 00:14:24.312 "copy": true, 00:14:24.312 "nvme_iov_md": false 00:14:24.312 }, 00:14:24.312 "memory_domains": [ 00:14:24.312 { 00:14:24.312 "dma_device_id": "system", 00:14:24.312 "dma_device_type": 1 00:14:24.312 }, 00:14:24.312 { 00:14:24.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.312 "dma_device_type": 2 00:14:24.312 } 00:14:24.312 ], 00:14:24.312 "driver_specific": {} 00:14:24.312 } 00:14:24.312 ] 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.312 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.570 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.570 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.570 "name": "Existed_Raid", 00:14:24.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.570 "strip_size_kb": 0, 00:14:24.570 "state": "configuring", 00:14:24.570 "raid_level": "raid1", 00:14:24.570 "superblock": false, 00:14:24.570 "num_base_bdevs": 4, 00:14:24.570 "num_base_bdevs_discovered": 2, 00:14:24.570 "num_base_bdevs_operational": 4, 00:14:24.570 "base_bdevs_list": [ 00:14:24.570 { 00:14:24.570 "name": "BaseBdev1", 00:14:24.570 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:24.570 "is_configured": true, 00:14:24.570 "data_offset": 0, 00:14:24.571 "data_size": 65536 00:14:24.571 }, 00:14:24.571 { 00:14:24.571 "name": "BaseBdev2", 00:14:24.571 "uuid": "d89d014c-5a51-4be1-a34d-ef1fc079def6", 00:14:24.571 "is_configured": true, 
00:14:24.571 "data_offset": 0, 00:14:24.571 "data_size": 65536 00:14:24.571 }, 00:14:24.571 { 00:14:24.571 "name": "BaseBdev3", 00:14:24.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.571 "is_configured": false, 00:14:24.571 "data_offset": 0, 00:14:24.571 "data_size": 0 00:14:24.571 }, 00:14:24.571 { 00:14:24.571 "name": "BaseBdev4", 00:14:24.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.571 "is_configured": false, 00:14:24.571 "data_offset": 0, 00:14:24.571 "data_size": 0 00:14:24.571 } 00:14:24.571 ] 00:14:24.571 }' 00:14:24.571 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.571 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.829 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:24.829 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.829 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.829 [2024-09-27 22:31:20.644047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.829 BaseBdev3 00:14:24.829 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.829 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:24.829 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.830 [ 00:14:24.830 { 00:14:24.830 "name": "BaseBdev3", 00:14:24.830 "aliases": [ 00:14:24.830 "253c8e54-dc69-47cb-9b2e-28e91dbbf959" 00:14:24.830 ], 00:14:24.830 "product_name": "Malloc disk", 00:14:24.830 "block_size": 512, 00:14:24.830 "num_blocks": 65536, 00:14:24.830 "uuid": "253c8e54-dc69-47cb-9b2e-28e91dbbf959", 00:14:24.830 "assigned_rate_limits": { 00:14:24.830 "rw_ios_per_sec": 0, 00:14:24.830 "rw_mbytes_per_sec": 0, 00:14:24.830 "r_mbytes_per_sec": 0, 00:14:24.830 "w_mbytes_per_sec": 0 00:14:24.830 }, 00:14:24.830 "claimed": true, 00:14:24.830 "claim_type": "exclusive_write", 00:14:24.830 "zoned": false, 00:14:24.830 "supported_io_types": { 00:14:24.830 "read": true, 00:14:24.830 "write": true, 00:14:24.830 "unmap": true, 00:14:24.830 "flush": true, 00:14:24.830 "reset": true, 00:14:24.830 "nvme_admin": false, 00:14:24.830 "nvme_io": false, 00:14:24.830 "nvme_io_md": false, 00:14:24.830 "write_zeroes": true, 00:14:24.830 "zcopy": true, 00:14:24.830 "get_zone_info": false, 00:14:24.830 "zone_management": false, 00:14:24.830 "zone_append": false, 00:14:24.830 "compare": false, 00:14:24.830 "compare_and_write": false, 
00:14:24.830 "abort": true, 00:14:24.830 "seek_hole": false, 00:14:24.830 "seek_data": false, 00:14:24.830 "copy": true, 00:14:24.830 "nvme_iov_md": false 00:14:24.830 }, 00:14:24.830 "memory_domains": [ 00:14:24.830 { 00:14:24.830 "dma_device_id": "system", 00:14:24.830 "dma_device_type": 1 00:14:24.830 }, 00:14:24.830 { 00:14:24.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.830 "dma_device_type": 2 00:14:24.830 } 00:14:24.830 ], 00:14:24.830 "driver_specific": {} 00:14:24.830 } 00:14:24.830 ] 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.830 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.089 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.089 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.089 "name": "Existed_Raid", 00:14:25.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.089 "strip_size_kb": 0, 00:14:25.089 "state": "configuring", 00:14:25.089 "raid_level": "raid1", 00:14:25.089 "superblock": false, 00:14:25.089 "num_base_bdevs": 4, 00:14:25.089 "num_base_bdevs_discovered": 3, 00:14:25.089 "num_base_bdevs_operational": 4, 00:14:25.089 "base_bdevs_list": [ 00:14:25.089 { 00:14:25.089 "name": "BaseBdev1", 00:14:25.089 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:25.089 "is_configured": true, 00:14:25.089 "data_offset": 0, 00:14:25.089 "data_size": 65536 00:14:25.089 }, 00:14:25.089 { 00:14:25.089 "name": "BaseBdev2", 00:14:25.089 "uuid": "d89d014c-5a51-4be1-a34d-ef1fc079def6", 00:14:25.089 "is_configured": true, 00:14:25.089 "data_offset": 0, 00:14:25.089 "data_size": 65536 00:14:25.089 }, 00:14:25.089 { 00:14:25.089 "name": "BaseBdev3", 00:14:25.089 "uuid": "253c8e54-dc69-47cb-9b2e-28e91dbbf959", 00:14:25.089 "is_configured": true, 00:14:25.089 "data_offset": 0, 00:14:25.089 "data_size": 65536 00:14:25.089 }, 00:14:25.089 { 00:14:25.089 "name": "BaseBdev4", 00:14:25.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.089 "is_configured": false, 
00:14:25.089 "data_offset": 0, 00:14:25.089 "data_size": 0 00:14:25.089 } 00:14:25.089 ] 00:14:25.089 }' 00:14:25.089 22:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.089 22:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 [2024-09-27 22:31:21.187956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.408 [2024-09-27 22:31:21.188329] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:25.408 [2024-09-27 22:31:21.188356] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:25.408 [2024-09-27 22:31:21.188713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:25.408 [2024-09-27 22:31:21.188914] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:25.408 [2024-09-27 22:31:21.188930] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:25.408 [2024-09-27 22:31:21.189254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.408 BaseBdev4 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.408 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.408 [ 00:14:25.408 { 00:14:25.408 "name": "BaseBdev4", 00:14:25.409 "aliases": [ 00:14:25.409 "ae402769-6ff4-4539-abba-c18d2f16768c" 00:14:25.409 ], 00:14:25.409 "product_name": "Malloc disk", 00:14:25.409 "block_size": 512, 00:14:25.409 "num_blocks": 65536, 00:14:25.409 "uuid": "ae402769-6ff4-4539-abba-c18d2f16768c", 00:14:25.409 "assigned_rate_limits": { 00:14:25.409 "rw_ios_per_sec": 0, 00:14:25.409 "rw_mbytes_per_sec": 0, 00:14:25.409 "r_mbytes_per_sec": 0, 00:14:25.409 "w_mbytes_per_sec": 0 00:14:25.409 }, 00:14:25.409 "claimed": true, 00:14:25.409 "claim_type": "exclusive_write", 00:14:25.409 "zoned": false, 00:14:25.409 "supported_io_types": { 00:14:25.409 "read": true, 00:14:25.409 "write": true, 00:14:25.409 "unmap": true, 00:14:25.409 "flush": true, 00:14:25.409 "reset": true, 00:14:25.409 
"nvme_admin": false, 00:14:25.409 "nvme_io": false, 00:14:25.409 "nvme_io_md": false, 00:14:25.409 "write_zeroes": true, 00:14:25.409 "zcopy": true, 00:14:25.409 "get_zone_info": false, 00:14:25.409 "zone_management": false, 00:14:25.409 "zone_append": false, 00:14:25.409 "compare": false, 00:14:25.409 "compare_and_write": false, 00:14:25.409 "abort": true, 00:14:25.409 "seek_hole": false, 00:14:25.409 "seek_data": false, 00:14:25.409 "copy": true, 00:14:25.409 "nvme_iov_md": false 00:14:25.409 }, 00:14:25.409 "memory_domains": [ 00:14:25.409 { 00:14:25.409 "dma_device_id": "system", 00:14:25.409 "dma_device_type": 1 00:14:25.409 }, 00:14:25.409 { 00:14:25.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.409 "dma_device_type": 2 00:14:25.409 } 00:14:25.409 ], 00:14:25.409 "driver_specific": {} 00:14:25.409 } 00:14:25.409 ] 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.409 22:31:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.409 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.667 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.667 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.667 "name": "Existed_Raid", 00:14:25.667 "uuid": "eb75abb2-c7aa-4ca1-a99a-4e766054b453", 00:14:25.667 "strip_size_kb": 0, 00:14:25.667 "state": "online", 00:14:25.667 "raid_level": "raid1", 00:14:25.667 "superblock": false, 00:14:25.667 "num_base_bdevs": 4, 00:14:25.667 "num_base_bdevs_discovered": 4, 00:14:25.667 "num_base_bdevs_operational": 4, 00:14:25.668 "base_bdevs_list": [ 00:14:25.668 { 00:14:25.668 "name": "BaseBdev1", 00:14:25.668 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:25.668 "is_configured": true, 00:14:25.668 "data_offset": 0, 00:14:25.668 "data_size": 65536 00:14:25.668 }, 00:14:25.668 { 00:14:25.668 "name": "BaseBdev2", 00:14:25.668 "uuid": "d89d014c-5a51-4be1-a34d-ef1fc079def6", 00:14:25.668 "is_configured": true, 00:14:25.668 "data_offset": 0, 00:14:25.668 "data_size": 65536 00:14:25.668 }, 00:14:25.668 { 00:14:25.668 "name": "BaseBdev3", 00:14:25.668 "uuid": 
"253c8e54-dc69-47cb-9b2e-28e91dbbf959", 00:14:25.668 "is_configured": true, 00:14:25.668 "data_offset": 0, 00:14:25.668 "data_size": 65536 00:14:25.668 }, 00:14:25.668 { 00:14:25.668 "name": "BaseBdev4", 00:14:25.668 "uuid": "ae402769-6ff4-4539-abba-c18d2f16768c", 00:14:25.668 "is_configured": true, 00:14:25.668 "data_offset": 0, 00:14:25.668 "data_size": 65536 00:14:25.668 } 00:14:25.668 ] 00:14:25.668 }' 00:14:25.668 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.668 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.927 [2024-09-27 22:31:21.703955] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.927 22:31:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.927 "name": "Existed_Raid", 00:14:25.927 "aliases": [ 00:14:25.927 "eb75abb2-c7aa-4ca1-a99a-4e766054b453" 00:14:25.927 ], 00:14:25.927 "product_name": "Raid Volume", 00:14:25.927 "block_size": 512, 00:14:25.927 "num_blocks": 65536, 00:14:25.927 "uuid": "eb75abb2-c7aa-4ca1-a99a-4e766054b453", 00:14:25.927 "assigned_rate_limits": { 00:14:25.927 "rw_ios_per_sec": 0, 00:14:25.927 "rw_mbytes_per_sec": 0, 00:14:25.927 "r_mbytes_per_sec": 0, 00:14:25.927 "w_mbytes_per_sec": 0 00:14:25.927 }, 00:14:25.927 "claimed": false, 00:14:25.927 "zoned": false, 00:14:25.927 "supported_io_types": { 00:14:25.927 "read": true, 00:14:25.927 "write": true, 00:14:25.927 "unmap": false, 00:14:25.927 "flush": false, 00:14:25.927 "reset": true, 00:14:25.927 "nvme_admin": false, 00:14:25.927 "nvme_io": false, 00:14:25.927 "nvme_io_md": false, 00:14:25.927 "write_zeroes": true, 00:14:25.927 "zcopy": false, 00:14:25.927 "get_zone_info": false, 00:14:25.927 "zone_management": false, 00:14:25.927 "zone_append": false, 00:14:25.927 "compare": false, 00:14:25.927 "compare_and_write": false, 00:14:25.927 "abort": false, 00:14:25.927 "seek_hole": false, 00:14:25.927 "seek_data": false, 00:14:25.927 "copy": false, 00:14:25.927 "nvme_iov_md": false 00:14:25.927 }, 00:14:25.927 "memory_domains": [ 00:14:25.927 { 00:14:25.927 "dma_device_id": "system", 00:14:25.927 "dma_device_type": 1 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.927 "dma_device_type": 2 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "system", 00:14:25.927 "dma_device_type": 1 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.927 "dma_device_type": 2 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "system", 00:14:25.927 "dma_device_type": 1 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:25.927 "dma_device_type": 2 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "system", 00:14:25.927 "dma_device_type": 1 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.927 "dma_device_type": 2 00:14:25.927 } 00:14:25.927 ], 00:14:25.927 "driver_specific": { 00:14:25.927 "raid": { 00:14:25.927 "uuid": "eb75abb2-c7aa-4ca1-a99a-4e766054b453", 00:14:25.927 "strip_size_kb": 0, 00:14:25.927 "state": "online", 00:14:25.927 "raid_level": "raid1", 00:14:25.927 "superblock": false, 00:14:25.927 "num_base_bdevs": 4, 00:14:25.927 "num_base_bdevs_discovered": 4, 00:14:25.927 "num_base_bdevs_operational": 4, 00:14:25.927 "base_bdevs_list": [ 00:14:25.927 { 00:14:25.927 "name": "BaseBdev1", 00:14:25.927 "uuid": "d9015a49-5ac2-4c8c-b64b-86fb24325efd", 00:14:25.927 "is_configured": true, 00:14:25.927 "data_offset": 0, 00:14:25.927 "data_size": 65536 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "name": "BaseBdev2", 00:14:25.927 "uuid": "d89d014c-5a51-4be1-a34d-ef1fc079def6", 00:14:25.927 "is_configured": true, 00:14:25.927 "data_offset": 0, 00:14:25.927 "data_size": 65536 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "name": "BaseBdev3", 00:14:25.927 "uuid": "253c8e54-dc69-47cb-9b2e-28e91dbbf959", 00:14:25.927 "is_configured": true, 00:14:25.927 "data_offset": 0, 00:14:25.927 "data_size": 65536 00:14:25.927 }, 00:14:25.927 { 00:14:25.927 "name": "BaseBdev4", 00:14:25.927 "uuid": "ae402769-6ff4-4539-abba-c18d2f16768c", 00:14:25.927 "is_configured": true, 00:14:25.927 "data_offset": 0, 00:14:25.927 "data_size": 65536 00:14:25.927 } 00:14:25.927 ] 00:14:25.927 } 00:14:25.927 } 00:14:25.927 }' 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:25.927 BaseBdev2 00:14:25.927 BaseBdev3 
00:14:25.927 BaseBdev4' 00:14:25.927 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.186 22:31:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.186 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.187 22:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.187 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.187 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.187 22:31:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.187 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:26.187 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.187 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.187 [2024-09-27 22:31:22.051691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.446 
22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.446 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.447 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.447 "name": "Existed_Raid", 00:14:26.447 "uuid": "eb75abb2-c7aa-4ca1-a99a-4e766054b453", 00:14:26.447 "strip_size_kb": 0, 00:14:26.447 "state": "online", 00:14:26.447 "raid_level": "raid1", 00:14:26.447 "superblock": false, 00:14:26.447 "num_base_bdevs": 4, 00:14:26.447 "num_base_bdevs_discovered": 3, 00:14:26.447 "num_base_bdevs_operational": 3, 00:14:26.447 "base_bdevs_list": [ 00:14:26.447 { 00:14:26.447 "name": null, 00:14:26.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.447 "is_configured": false, 00:14:26.447 "data_offset": 0, 00:14:26.447 "data_size": 65536 00:14:26.447 }, 00:14:26.447 { 00:14:26.447 "name": "BaseBdev2", 00:14:26.447 "uuid": "d89d014c-5a51-4be1-a34d-ef1fc079def6", 00:14:26.447 "is_configured": true, 00:14:26.447 "data_offset": 0, 00:14:26.447 "data_size": 65536 00:14:26.447 }, 00:14:26.447 { 00:14:26.447 "name": "BaseBdev3", 00:14:26.447 "uuid": "253c8e54-dc69-47cb-9b2e-28e91dbbf959", 00:14:26.447 "is_configured": true, 00:14:26.447 "data_offset": 0, 
00:14:26.447 "data_size": 65536 00:14:26.447 }, 00:14:26.447 { 00:14:26.447 "name": "BaseBdev4", 00:14:26.447 "uuid": "ae402769-6ff4-4539-abba-c18d2f16768c", 00:14:26.447 "is_configured": true, 00:14:26.447 "data_offset": 0, 00:14:26.447 "data_size": 65536 00:14:26.447 } 00:14:26.447 ] 00:14:26.447 }' 00:14:26.447 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.447 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 [2024-09-27 22:31:22.670827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.014 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.014 [2024-09-27 22:31:22.846556] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.273 22:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.273 [2024-09-27 22:31:22.999453] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:27.273 [2024-09-27 22:31:22.999589] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.273 [2024-09-27 22:31:23.104806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.273 [2024-09-27 22:31:23.104878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.273 [2024-09-27 22:31:23.104895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.273 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.532 BaseBdev2 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.532 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.532 [ 00:14:27.532 { 00:14:27.532 "name": "BaseBdev2", 00:14:27.532 "aliases": [ 00:14:27.533 "b45499f5-f46e-47c9-b93f-56bd1aadeb21" 00:14:27.533 ], 00:14:27.533 "product_name": "Malloc disk", 00:14:27.533 "block_size": 512, 00:14:27.533 "num_blocks": 65536, 00:14:27.533 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:27.533 "assigned_rate_limits": { 00:14:27.533 "rw_ios_per_sec": 0, 00:14:27.533 "rw_mbytes_per_sec": 0, 00:14:27.533 "r_mbytes_per_sec": 0, 00:14:27.533 "w_mbytes_per_sec": 0 00:14:27.533 }, 00:14:27.533 "claimed": false, 00:14:27.533 "zoned": false, 00:14:27.533 "supported_io_types": { 00:14:27.533 "read": true, 00:14:27.533 "write": true, 00:14:27.533 "unmap": true, 00:14:27.533 "flush": true, 00:14:27.533 "reset": true, 00:14:27.533 "nvme_admin": false, 00:14:27.533 "nvme_io": false, 00:14:27.533 "nvme_io_md": false, 00:14:27.533 "write_zeroes": true, 00:14:27.533 "zcopy": true, 00:14:27.533 "get_zone_info": false, 00:14:27.533 "zone_management": false, 00:14:27.533 "zone_append": false, 
00:14:27.533 "compare": false, 00:14:27.533 "compare_and_write": false, 00:14:27.533 "abort": true, 00:14:27.533 "seek_hole": false, 00:14:27.533 "seek_data": false, 00:14:27.533 "copy": true, 00:14:27.533 "nvme_iov_md": false 00:14:27.533 }, 00:14:27.533 "memory_domains": [ 00:14:27.533 { 00:14:27.533 "dma_device_id": "system", 00:14:27.533 "dma_device_type": 1 00:14:27.533 }, 00:14:27.533 { 00:14:27.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.533 "dma_device_type": 2 00:14:27.533 } 00:14:27.533 ], 00:14:27.533 "driver_specific": {} 00:14:27.533 } 00:14:27.533 ] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.533 BaseBdev3 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.533 [ 00:14:27.533 { 00:14:27.533 "name": "BaseBdev3", 00:14:27.533 "aliases": [ 00:14:27.533 "602f8164-c143-43e3-ba06-5641b95233c9" 00:14:27.533 ], 00:14:27.533 "product_name": "Malloc disk", 00:14:27.533 "block_size": 512, 00:14:27.533 "num_blocks": 65536, 00:14:27.533 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:27.533 "assigned_rate_limits": { 00:14:27.533 "rw_ios_per_sec": 0, 00:14:27.533 "rw_mbytes_per_sec": 0, 00:14:27.533 "r_mbytes_per_sec": 0, 00:14:27.533 "w_mbytes_per_sec": 0 00:14:27.533 }, 00:14:27.533 "claimed": false, 00:14:27.533 "zoned": false, 00:14:27.533 "supported_io_types": { 00:14:27.533 "read": true, 00:14:27.533 "write": true, 00:14:27.533 "unmap": true, 00:14:27.533 "flush": true, 00:14:27.533 "reset": true, 00:14:27.533 "nvme_admin": false, 00:14:27.533 "nvme_io": false, 00:14:27.533 "nvme_io_md": false, 00:14:27.533 "write_zeroes": true, 00:14:27.533 "zcopy": true, 00:14:27.533 "get_zone_info": false, 00:14:27.533 "zone_management": false, 00:14:27.533 "zone_append": false, 
00:14:27.533 "compare": false, 00:14:27.533 "compare_and_write": false, 00:14:27.533 "abort": true, 00:14:27.533 "seek_hole": false, 00:14:27.533 "seek_data": false, 00:14:27.533 "copy": true, 00:14:27.533 "nvme_iov_md": false 00:14:27.533 }, 00:14:27.533 "memory_domains": [ 00:14:27.533 { 00:14:27.533 "dma_device_id": "system", 00:14:27.533 "dma_device_type": 1 00:14:27.533 }, 00:14:27.533 { 00:14:27.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.533 "dma_device_type": 2 00:14:27.533 } 00:14:27.533 ], 00:14:27.533 "driver_specific": {} 00:14:27.533 } 00:14:27.533 ] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.533 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 BaseBdev4 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 [ 00:14:27.792 { 00:14:27.792 "name": "BaseBdev4", 00:14:27.792 "aliases": [ 00:14:27.792 "2cc58b78-1097-4b50-91ce-bd692648feeb" 00:14:27.792 ], 00:14:27.792 "product_name": "Malloc disk", 00:14:27.792 "block_size": 512, 00:14:27.792 "num_blocks": 65536, 00:14:27.792 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:27.792 "assigned_rate_limits": { 00:14:27.792 "rw_ios_per_sec": 0, 00:14:27.792 "rw_mbytes_per_sec": 0, 00:14:27.792 "r_mbytes_per_sec": 0, 00:14:27.792 "w_mbytes_per_sec": 0 00:14:27.792 }, 00:14:27.792 "claimed": false, 00:14:27.792 "zoned": false, 00:14:27.792 "supported_io_types": { 00:14:27.792 "read": true, 00:14:27.792 "write": true, 00:14:27.792 "unmap": true, 00:14:27.792 "flush": true, 00:14:27.792 "reset": true, 00:14:27.792 "nvme_admin": false, 00:14:27.792 "nvme_io": false, 00:14:27.792 "nvme_io_md": false, 00:14:27.792 "write_zeroes": true, 00:14:27.792 "zcopy": true, 00:14:27.792 "get_zone_info": false, 00:14:27.792 "zone_management": false, 00:14:27.792 "zone_append": false, 
00:14:27.792 "compare": false, 00:14:27.792 "compare_and_write": false, 00:14:27.792 "abort": true, 00:14:27.792 "seek_hole": false, 00:14:27.792 "seek_data": false, 00:14:27.792 "copy": true, 00:14:27.792 "nvme_iov_md": false 00:14:27.792 }, 00:14:27.792 "memory_domains": [ 00:14:27.792 { 00:14:27.792 "dma_device_id": "system", 00:14:27.792 "dma_device_type": 1 00:14:27.792 }, 00:14:27.792 { 00:14:27.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.792 "dma_device_type": 2 00:14:27.792 } 00:14:27.792 ], 00:14:27.792 "driver_specific": {} 00:14:27.792 } 00:14:27.792 ] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 [2024-09-27 22:31:23.461320] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.792 [2024-09-27 22:31:23.461508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.792 [2024-09-27 22:31:23.461633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.792 [2024-09-27 22:31:23.464041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.792 [2024-09-27 22:31:23.464243] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:27.792 "name": "Existed_Raid", 00:14:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.792 "strip_size_kb": 0, 00:14:27.792 "state": "configuring", 00:14:27.792 "raid_level": "raid1", 00:14:27.792 "superblock": false, 00:14:27.792 "num_base_bdevs": 4, 00:14:27.792 "num_base_bdevs_discovered": 3, 00:14:27.792 "num_base_bdevs_operational": 4, 00:14:27.792 "base_bdevs_list": [ 00:14:27.792 { 00:14:27.792 "name": "BaseBdev1", 00:14:27.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.792 "is_configured": false, 00:14:27.792 "data_offset": 0, 00:14:27.792 "data_size": 0 00:14:27.792 }, 00:14:27.792 { 00:14:27.792 "name": "BaseBdev2", 00:14:27.792 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:27.792 "is_configured": true, 00:14:27.792 "data_offset": 0, 00:14:27.792 "data_size": 65536 00:14:27.792 }, 00:14:27.792 { 00:14:27.792 "name": "BaseBdev3", 00:14:27.792 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:27.792 "is_configured": true, 00:14:27.792 "data_offset": 0, 00:14:27.792 "data_size": 65536 00:14:27.792 }, 00:14:27.792 { 00:14:27.792 "name": "BaseBdev4", 00:14:27.792 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:27.792 "is_configured": true, 00:14:27.792 "data_offset": 0, 00:14:27.792 "data_size": 65536 00:14:27.792 } 00:14:27.792 ] 00:14:27.792 }' 00:14:27.792 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.793 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.050 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:28.050 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.050 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.308 [2024-09-27 22:31:23.924661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.308 "name": "Existed_Raid", 00:14:28.308 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:28.308 "strip_size_kb": 0, 00:14:28.308 "state": "configuring", 00:14:28.308 "raid_level": "raid1", 00:14:28.308 "superblock": false, 00:14:28.308 "num_base_bdevs": 4, 00:14:28.308 "num_base_bdevs_discovered": 2, 00:14:28.308 "num_base_bdevs_operational": 4, 00:14:28.308 "base_bdevs_list": [ 00:14:28.308 { 00:14:28.308 "name": "BaseBdev1", 00:14:28.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.308 "is_configured": false, 00:14:28.308 "data_offset": 0, 00:14:28.308 "data_size": 0 00:14:28.308 }, 00:14:28.308 { 00:14:28.308 "name": null, 00:14:28.308 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:28.308 "is_configured": false, 00:14:28.308 "data_offset": 0, 00:14:28.308 "data_size": 65536 00:14:28.308 }, 00:14:28.308 { 00:14:28.308 "name": "BaseBdev3", 00:14:28.308 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:28.308 "is_configured": true, 00:14:28.308 "data_offset": 0, 00:14:28.308 "data_size": 65536 00:14:28.308 }, 00:14:28.308 { 00:14:28.308 "name": "BaseBdev4", 00:14:28.308 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:28.308 "is_configured": true, 00:14:28.308 "data_offset": 0, 00:14:28.308 "data_size": 65536 00:14:28.308 } 00:14:28.308 ] 00:14:28.308 }' 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.308 22:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 [2024-09-27 22:31:24.434234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.566 BaseBdev1 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.566 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.826 [ 00:14:28.826 { 00:14:28.826 "name": "BaseBdev1", 00:14:28.826 "aliases": [ 00:14:28.826 "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e" 00:14:28.826 ], 00:14:28.826 "product_name": "Malloc disk", 00:14:28.826 "block_size": 512, 00:14:28.826 "num_blocks": 65536, 00:14:28.826 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:28.826 "assigned_rate_limits": { 00:14:28.826 "rw_ios_per_sec": 0, 00:14:28.826 "rw_mbytes_per_sec": 0, 00:14:28.826 "r_mbytes_per_sec": 0, 00:14:28.826 "w_mbytes_per_sec": 0 00:14:28.826 }, 00:14:28.826 "claimed": true, 00:14:28.826 "claim_type": "exclusive_write", 00:14:28.826 "zoned": false, 00:14:28.826 "supported_io_types": { 00:14:28.826 "read": true, 00:14:28.826 "write": true, 00:14:28.826 "unmap": true, 00:14:28.826 "flush": true, 00:14:28.826 "reset": true, 00:14:28.826 "nvme_admin": false, 00:14:28.826 "nvme_io": false, 00:14:28.826 "nvme_io_md": false, 00:14:28.826 "write_zeroes": true, 00:14:28.826 "zcopy": true, 00:14:28.826 "get_zone_info": false, 00:14:28.826 "zone_management": false, 00:14:28.826 "zone_append": false, 00:14:28.826 "compare": false, 00:14:28.826 "compare_and_write": false, 00:14:28.826 "abort": true, 00:14:28.826 "seek_hole": false, 00:14:28.826 "seek_data": false, 00:14:28.826 "copy": true, 00:14:28.826 "nvme_iov_md": false 00:14:28.826 }, 00:14:28.826 "memory_domains": [ 00:14:28.826 { 00:14:28.826 "dma_device_id": "system", 00:14:28.826 "dma_device_type": 1 00:14:28.826 }, 00:14:28.826 { 00:14:28.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.826 "dma_device_type": 2 00:14:28.826 } 00:14:28.826 ], 00:14:28.826 "driver_specific": {} 00:14:28.826 } 00:14:28.826 ] 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.826 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.826 "name": "Existed_Raid", 00:14:28.826 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:28.826 "strip_size_kb": 0, 00:14:28.826 "state": "configuring", 00:14:28.826 "raid_level": "raid1", 00:14:28.826 "superblock": false, 00:14:28.826 "num_base_bdevs": 4, 00:14:28.826 "num_base_bdevs_discovered": 3, 00:14:28.826 "num_base_bdevs_operational": 4, 00:14:28.826 "base_bdevs_list": [ 00:14:28.826 { 00:14:28.826 "name": "BaseBdev1", 00:14:28.826 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:28.826 "is_configured": true, 00:14:28.826 "data_offset": 0, 00:14:28.826 "data_size": 65536 00:14:28.826 }, 00:14:28.826 { 00:14:28.827 "name": null, 00:14:28.827 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:28.827 "is_configured": false, 00:14:28.827 "data_offset": 0, 00:14:28.827 "data_size": 65536 00:14:28.827 }, 00:14:28.827 { 00:14:28.827 "name": "BaseBdev3", 00:14:28.827 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:28.827 "is_configured": true, 00:14:28.827 "data_offset": 0, 00:14:28.827 "data_size": 65536 00:14:28.827 }, 00:14:28.827 { 00:14:28.827 "name": "BaseBdev4", 00:14:28.827 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:28.827 "is_configured": true, 00:14:28.827 "data_offset": 0, 00:14:28.827 "data_size": 65536 00:14:28.827 } 00:14:28.827 ] 00:14:28.827 }' 00:14:28.827 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.827 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.087 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.087 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:29.087 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.087 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.345 [2024-09-27 22:31:24.989562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.345 22:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.345 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.345 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.345 "name": "Existed_Raid", 00:14:29.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.345 "strip_size_kb": 0, 00:14:29.345 "state": "configuring", 00:14:29.345 "raid_level": "raid1", 00:14:29.345 "superblock": false, 00:14:29.345 "num_base_bdevs": 4, 00:14:29.345 "num_base_bdevs_discovered": 2, 00:14:29.345 "num_base_bdevs_operational": 4, 00:14:29.345 "base_bdevs_list": [ 00:14:29.345 { 00:14:29.345 "name": "BaseBdev1", 00:14:29.345 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:29.345 "is_configured": true, 00:14:29.345 "data_offset": 0, 00:14:29.345 "data_size": 65536 00:14:29.345 }, 00:14:29.345 { 00:14:29.345 "name": null, 00:14:29.345 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:29.345 "is_configured": false, 00:14:29.345 "data_offset": 0, 00:14:29.345 "data_size": 65536 00:14:29.345 }, 00:14:29.345 { 00:14:29.345 "name": null, 00:14:29.345 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:29.345 "is_configured": false, 00:14:29.345 "data_offset": 0, 00:14:29.345 "data_size": 65536 00:14:29.345 }, 00:14:29.345 { 00:14:29.345 "name": "BaseBdev4", 00:14:29.345 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:29.345 "is_configured": true, 00:14:29.345 "data_offset": 0, 00:14:29.345 "data_size": 65536 00:14:29.345 } 00:14:29.345 ] 00:14:29.345 }' 00:14:29.345 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.345 22:31:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.603 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.603 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.603 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.603 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:29.603 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.861 [2024-09-27 22:31:25.496886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.861 22:31:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.861 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.862 "name": "Existed_Raid", 00:14:29.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.862 "strip_size_kb": 0, 00:14:29.862 "state": "configuring", 00:14:29.862 "raid_level": "raid1", 00:14:29.862 "superblock": false, 00:14:29.862 "num_base_bdevs": 4, 00:14:29.862 "num_base_bdevs_discovered": 3, 00:14:29.862 "num_base_bdevs_operational": 4, 00:14:29.862 "base_bdevs_list": [ 00:14:29.862 { 00:14:29.862 "name": "BaseBdev1", 00:14:29.862 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:29.862 "is_configured": true, 00:14:29.862 "data_offset": 0, 00:14:29.862 "data_size": 65536 00:14:29.862 }, 00:14:29.862 { 00:14:29.862 "name": null, 00:14:29.862 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:29.862 "is_configured": false, 00:14:29.862 "data_offset": 
0, 00:14:29.862 "data_size": 65536 00:14:29.862 }, 00:14:29.862 { 00:14:29.862 "name": "BaseBdev3", 00:14:29.862 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:29.862 "is_configured": true, 00:14:29.862 "data_offset": 0, 00:14:29.862 "data_size": 65536 00:14:29.862 }, 00:14:29.862 { 00:14:29.862 "name": "BaseBdev4", 00:14:29.862 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:29.862 "is_configured": true, 00:14:29.862 "data_offset": 0, 00:14:29.862 "data_size": 65536 00:14:29.862 } 00:14:29.862 ] 00:14:29.862 }' 00:14:29.862 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.862 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:30.121 22:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.121 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 22:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.381 [2024-09-27 22:31:26.020190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.381 22:31:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.381 "name": "Existed_Raid", 00:14:30.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.381 "strip_size_kb": 0, 00:14:30.381 "state": "configuring", 00:14:30.381 
"raid_level": "raid1", 00:14:30.381 "superblock": false, 00:14:30.381 "num_base_bdevs": 4, 00:14:30.381 "num_base_bdevs_discovered": 2, 00:14:30.381 "num_base_bdevs_operational": 4, 00:14:30.381 "base_bdevs_list": [ 00:14:30.381 { 00:14:30.381 "name": null, 00:14:30.381 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:30.381 "is_configured": false, 00:14:30.381 "data_offset": 0, 00:14:30.381 "data_size": 65536 00:14:30.381 }, 00:14:30.381 { 00:14:30.381 "name": null, 00:14:30.381 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:30.381 "is_configured": false, 00:14:30.381 "data_offset": 0, 00:14:30.381 "data_size": 65536 00:14:30.381 }, 00:14:30.381 { 00:14:30.381 "name": "BaseBdev3", 00:14:30.381 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:30.381 "is_configured": true, 00:14:30.381 "data_offset": 0, 00:14:30.381 "data_size": 65536 00:14:30.381 }, 00:14:30.381 { 00:14:30.381 "name": "BaseBdev4", 00:14:30.381 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:30.381 "is_configured": true, 00:14:30.381 "data_offset": 0, 00:14:30.381 "data_size": 65536 00:14:30.381 } 00:14:30.381 ] 00:14:30.381 }' 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.381 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.947 [2024-09-27 22:31:26.648913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.947 "name": "Existed_Raid", 00:14:30.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.947 "strip_size_kb": 0, 00:14:30.947 "state": "configuring", 00:14:30.947 "raid_level": "raid1", 00:14:30.947 "superblock": false, 00:14:30.947 "num_base_bdevs": 4, 00:14:30.947 "num_base_bdevs_discovered": 3, 00:14:30.947 "num_base_bdevs_operational": 4, 00:14:30.947 "base_bdevs_list": [ 00:14:30.947 { 00:14:30.947 "name": null, 00:14:30.947 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:30.947 "is_configured": false, 00:14:30.947 "data_offset": 0, 00:14:30.947 "data_size": 65536 00:14:30.947 }, 00:14:30.947 { 00:14:30.947 "name": "BaseBdev2", 00:14:30.947 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:30.947 "is_configured": true, 00:14:30.947 "data_offset": 0, 00:14:30.947 "data_size": 65536 00:14:30.947 }, 00:14:30.947 { 00:14:30.947 "name": "BaseBdev3", 00:14:30.947 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:30.947 "is_configured": true, 00:14:30.947 "data_offset": 0, 00:14:30.947 "data_size": 65536 00:14:30.947 }, 00:14:30.947 { 00:14:30.947 "name": "BaseBdev4", 00:14:30.947 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:30.947 "is_configured": true, 00:14:30.947 "data_offset": 0, 00:14:30.947 "data_size": 65536 00:14:30.947 } 00:14:30.947 ] 00:14:30.947 }' 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.947 22:31:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 22:31:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 [2024-09-27 22:31:27.255267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:31.513 [2024-09-27 22:31:27.255329] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:31.513 [2024-09-27 22:31:27.255342] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:31.513 
[2024-09-27 22:31:27.255667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:31.513 [2024-09-27 22:31:27.255834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:31.513 [2024-09-27 22:31:27.255846] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:31.513 [2024-09-27 22:31:27.256173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.513 NewBaseBdev 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 [ 00:14:31.513 { 00:14:31.513 "name": "NewBaseBdev", 00:14:31.513 "aliases": [ 00:14:31.513 "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e" 00:14:31.513 ], 00:14:31.513 "product_name": "Malloc disk", 00:14:31.513 "block_size": 512, 00:14:31.513 "num_blocks": 65536, 00:14:31.513 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:31.513 "assigned_rate_limits": { 00:14:31.513 "rw_ios_per_sec": 0, 00:14:31.513 "rw_mbytes_per_sec": 0, 00:14:31.513 "r_mbytes_per_sec": 0, 00:14:31.513 "w_mbytes_per_sec": 0 00:14:31.513 }, 00:14:31.513 "claimed": true, 00:14:31.513 "claim_type": "exclusive_write", 00:14:31.513 "zoned": false, 00:14:31.513 "supported_io_types": { 00:14:31.513 "read": true, 00:14:31.513 "write": true, 00:14:31.513 "unmap": true, 00:14:31.513 "flush": true, 00:14:31.513 "reset": true, 00:14:31.513 "nvme_admin": false, 00:14:31.513 "nvme_io": false, 00:14:31.513 "nvme_io_md": false, 00:14:31.513 "write_zeroes": true, 00:14:31.513 "zcopy": true, 00:14:31.513 "get_zone_info": false, 00:14:31.513 "zone_management": false, 00:14:31.513 "zone_append": false, 00:14:31.513 "compare": false, 00:14:31.513 "compare_and_write": false, 00:14:31.513 "abort": true, 00:14:31.513 "seek_hole": false, 00:14:31.513 "seek_data": false, 00:14:31.513 "copy": true, 00:14:31.513 "nvme_iov_md": false 00:14:31.513 }, 00:14:31.513 "memory_domains": [ 00:14:31.513 { 00:14:31.513 "dma_device_id": "system", 00:14:31.513 "dma_device_type": 1 00:14:31.513 }, 00:14:31.513 { 00:14:31.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.513 "dma_device_type": 2 00:14:31.513 } 00:14:31.513 ], 00:14:31.513 "driver_specific": {} 00:14:31.513 } 00:14:31.513 ] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.513 "name": "Existed_Raid", 00:14:31.513 "uuid": "b1299e73-c415-4f89-a888-ec6c8600963e", 00:14:31.513 "strip_size_kb": 0, 00:14:31.513 "state": "online", 00:14:31.513 
"raid_level": "raid1", 00:14:31.513 "superblock": false, 00:14:31.513 "num_base_bdevs": 4, 00:14:31.513 "num_base_bdevs_discovered": 4, 00:14:31.513 "num_base_bdevs_operational": 4, 00:14:31.513 "base_bdevs_list": [ 00:14:31.513 { 00:14:31.513 "name": "NewBaseBdev", 00:14:31.513 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:31.513 "is_configured": true, 00:14:31.513 "data_offset": 0, 00:14:31.513 "data_size": 65536 00:14:31.513 }, 00:14:31.513 { 00:14:31.513 "name": "BaseBdev2", 00:14:31.513 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:31.513 "is_configured": true, 00:14:31.513 "data_offset": 0, 00:14:31.513 "data_size": 65536 00:14:31.513 }, 00:14:31.513 { 00:14:31.513 "name": "BaseBdev3", 00:14:31.513 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:31.513 "is_configured": true, 00:14:31.513 "data_offset": 0, 00:14:31.513 "data_size": 65536 00:14:31.513 }, 00:14:31.513 { 00:14:31.513 "name": "BaseBdev4", 00:14:31.513 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:31.513 "is_configured": true, 00:14:31.513 "data_offset": 0, 00:14:31.513 "data_size": 65536 00:14:31.513 } 00:14:31.513 ] 00:14:31.513 }' 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.513 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.079 [2024-09-27 22:31:27.743083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.079 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:32.079 "name": "Existed_Raid", 00:14:32.079 "aliases": [ 00:14:32.079 "b1299e73-c415-4f89-a888-ec6c8600963e" 00:14:32.079 ], 00:14:32.079 "product_name": "Raid Volume", 00:14:32.079 "block_size": 512, 00:14:32.079 "num_blocks": 65536, 00:14:32.079 "uuid": "b1299e73-c415-4f89-a888-ec6c8600963e", 00:14:32.079 "assigned_rate_limits": { 00:14:32.079 "rw_ios_per_sec": 0, 00:14:32.079 "rw_mbytes_per_sec": 0, 00:14:32.079 "r_mbytes_per_sec": 0, 00:14:32.079 "w_mbytes_per_sec": 0 00:14:32.079 }, 00:14:32.079 "claimed": false, 00:14:32.079 "zoned": false, 00:14:32.079 "supported_io_types": { 00:14:32.079 "read": true, 00:14:32.079 "write": true, 00:14:32.079 "unmap": false, 00:14:32.079 "flush": false, 00:14:32.079 "reset": true, 00:14:32.079 "nvme_admin": false, 00:14:32.079 "nvme_io": false, 00:14:32.079 "nvme_io_md": false, 00:14:32.079 "write_zeroes": true, 00:14:32.079 "zcopy": false, 00:14:32.079 "get_zone_info": false, 00:14:32.079 "zone_management": false, 00:14:32.079 "zone_append": false, 00:14:32.079 "compare": false, 00:14:32.079 "compare_and_write": false, 00:14:32.079 "abort": false, 00:14:32.079 "seek_hole": false, 00:14:32.079 "seek_data": false, 00:14:32.079 
"copy": false, 00:14:32.079 "nvme_iov_md": false 00:14:32.079 }, 00:14:32.079 "memory_domains": [ 00:14:32.079 { 00:14:32.079 "dma_device_id": "system", 00:14:32.079 "dma_device_type": 1 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.079 "dma_device_type": 2 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "dma_device_id": "system", 00:14:32.079 "dma_device_type": 1 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.079 "dma_device_type": 2 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "dma_device_id": "system", 00:14:32.079 "dma_device_type": 1 00:14:32.079 }, 00:14:32.079 { 00:14:32.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.080 "dma_device_type": 2 00:14:32.080 }, 00:14:32.080 { 00:14:32.080 "dma_device_id": "system", 00:14:32.080 "dma_device_type": 1 00:14:32.080 }, 00:14:32.080 { 00:14:32.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.080 "dma_device_type": 2 00:14:32.080 } 00:14:32.080 ], 00:14:32.080 "driver_specific": { 00:14:32.080 "raid": { 00:14:32.080 "uuid": "b1299e73-c415-4f89-a888-ec6c8600963e", 00:14:32.080 "strip_size_kb": 0, 00:14:32.080 "state": "online", 00:14:32.080 "raid_level": "raid1", 00:14:32.080 "superblock": false, 00:14:32.080 "num_base_bdevs": 4, 00:14:32.080 "num_base_bdevs_discovered": 4, 00:14:32.080 "num_base_bdevs_operational": 4, 00:14:32.080 "base_bdevs_list": [ 00:14:32.080 { 00:14:32.080 "name": "NewBaseBdev", 00:14:32.080 "uuid": "21f8f3cc-bf1b-4bcd-bb3f-e166e1e85f6e", 00:14:32.080 "is_configured": true, 00:14:32.080 "data_offset": 0, 00:14:32.080 "data_size": 65536 00:14:32.080 }, 00:14:32.080 { 00:14:32.080 "name": "BaseBdev2", 00:14:32.080 "uuid": "b45499f5-f46e-47c9-b93f-56bd1aadeb21", 00:14:32.080 "is_configured": true, 00:14:32.080 "data_offset": 0, 00:14:32.080 "data_size": 65536 00:14:32.080 }, 00:14:32.080 { 00:14:32.080 "name": "BaseBdev3", 00:14:32.080 "uuid": "602f8164-c143-43e3-ba06-5641b95233c9", 00:14:32.080 
"is_configured": true, 00:14:32.080 "data_offset": 0, 00:14:32.080 "data_size": 65536 00:14:32.080 }, 00:14:32.080 { 00:14:32.080 "name": "BaseBdev4", 00:14:32.080 "uuid": "2cc58b78-1097-4b50-91ce-bd692648feeb", 00:14:32.080 "is_configured": true, 00:14:32.080 "data_offset": 0, 00:14:32.080 "data_size": 65536 00:14:32.080 } 00:14:32.080 ] 00:14:32.080 } 00:14:32.080 } 00:14:32.080 }' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:32.080 BaseBdev2 00:14:32.080 BaseBdev3 00:14:32.080 BaseBdev4' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.080 22:31:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.080 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.338 22:31:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.338 22:31:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.338 [2024-09-27 22:31:28.050332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.338 [2024-09-27 22:31:28.050373] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.338 [2024-09-27 22:31:28.050464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.338 [2024-09-27 22:31:28.050802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.338 [2024-09-27 22:31:28.050819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73968 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73968 ']' 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73968 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73968 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.338 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.339 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73968' 00:14:32.339 killing process with pid 73968 00:14:32.339 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73968 00:14:32.339 [2024-09-27 22:31:28.104281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.339 22:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73968 00:14:32.903 [2024-09-27 22:31:28.550628] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:35.432 ************************************ 00:14:35.432 END TEST raid_state_function_test 00:14:35.432 ************************************ 00:14:35.432 00:14:35.432 real 0m13.447s 00:14:35.432 user 0m20.504s 00:14:35.432 sys 0m2.419s 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:35.432 22:31:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:35.432 22:31:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:35.432 22:31:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.432 22:31:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.432 ************************************ 00:14:35.432 START TEST raid_state_function_test_sb 00:14:35.432 ************************************ 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:35.432 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.433 
22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74656 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74656' 00:14:35.433 Process raid pid: 74656 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74656 00:14:35.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74656 ']' 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.433 22:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.433 [2024-09-27 22:31:30.889716] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:35.433 [2024-09-27 22:31:30.890127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.433 [2024-09-27 22:31:31.066208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.691 [2024-09-27 22:31:31.321410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.949 [2024-09-27 22:31:31.578531] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.949 [2024-09-27 22:31:31.578571] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.517 [2024-09-27 22:31:32.099109] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.517 [2024-09-27 22:31:32.099173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.517 [2024-09-27 22:31:32.099186] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.517 [2024-09-27 22:31:32.099201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.517 [2024-09-27 22:31:32.099210] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:36.517 [2024-09-27 22:31:32.099223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.517 [2024-09-27 22:31:32.099231] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.517 [2024-09-27 22:31:32.099243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.517 22:31:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.517 "name": "Existed_Raid", 00:14:36.517 "uuid": "9e25d2f0-2356-4d86-a908-77e28c1c9f38", 00:14:36.517 "strip_size_kb": 0, 00:14:36.517 "state": "configuring", 00:14:36.517 "raid_level": "raid1", 00:14:36.517 "superblock": true, 00:14:36.517 "num_base_bdevs": 4, 00:14:36.517 "num_base_bdevs_discovered": 0, 00:14:36.517 "num_base_bdevs_operational": 4, 00:14:36.517 "base_bdevs_list": [ 00:14:36.517 { 00:14:36.517 "name": "BaseBdev1", 00:14:36.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.517 "is_configured": false, 00:14:36.517 "data_offset": 0, 00:14:36.517 "data_size": 0 00:14:36.517 }, 00:14:36.517 { 00:14:36.517 "name": "BaseBdev2", 00:14:36.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.517 "is_configured": false, 00:14:36.517 "data_offset": 0, 00:14:36.517 "data_size": 0 00:14:36.517 }, 00:14:36.517 { 00:14:36.517 "name": "BaseBdev3", 00:14:36.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.517 "is_configured": false, 00:14:36.517 "data_offset": 0, 00:14:36.517 "data_size": 0 00:14:36.517 }, 00:14:36.517 { 00:14:36.517 "name": "BaseBdev4", 00:14:36.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.517 "is_configured": false, 00:14:36.517 "data_offset": 0, 00:14:36.517 "data_size": 0 00:14:36.517 } 00:14:36.517 ] 00:14:36.517 }' 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.517 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.776 [2024-09-27 22:31:32.542363] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.776 [2024-09-27 22:31:32.542593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.776 [2024-09-27 22:31:32.554398] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.776 [2024-09-27 22:31:32.554462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.776 [2024-09-27 22:31:32.554473] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.776 [2024-09-27 22:31:32.554487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.776 [2024-09-27 22:31:32.554495] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.776 [2024-09-27 22:31:32.554508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.776 [2024-09-27 22:31:32.554516] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:36.776 [2024-09-27 22:31:32.554528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.776 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 [2024-09-27 22:31:32.610895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.777 BaseBdev1 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.777 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 [ 00:14:36.777 { 00:14:36.777 "name": "BaseBdev1", 00:14:36.777 "aliases": [ 00:14:36.777 "a8afcd06-8b81-4557-80cc-a27ebdf863c5" 00:14:36.777 ], 00:14:36.777 "product_name": "Malloc disk", 00:14:36.777 "block_size": 512, 00:14:36.777 "num_blocks": 65536, 00:14:36.777 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:36.777 "assigned_rate_limits": { 00:14:36.777 "rw_ios_per_sec": 0, 00:14:36.777 "rw_mbytes_per_sec": 0, 00:14:36.777 "r_mbytes_per_sec": 0, 00:14:36.777 "w_mbytes_per_sec": 0 00:14:36.777 }, 00:14:36.777 "claimed": true, 00:14:36.777 "claim_type": "exclusive_write", 00:14:36.777 "zoned": false, 00:14:36.777 "supported_io_types": { 00:14:36.777 "read": true, 00:14:36.777 "write": true, 00:14:36.777 "unmap": true, 00:14:36.777 "flush": true, 00:14:36.777 "reset": true, 00:14:36.777 "nvme_admin": false, 00:14:36.777 "nvme_io": false, 00:14:36.777 "nvme_io_md": false, 00:14:36.777 "write_zeroes": true, 00:14:36.777 "zcopy": true, 00:14:36.777 "get_zone_info": false, 00:14:36.777 "zone_management": false, 00:14:36.777 "zone_append": false, 00:14:36.777 "compare": false, 00:14:36.777 "compare_and_write": false, 00:14:36.777 "abort": true, 00:14:36.777 "seek_hole": false, 00:14:37.036 "seek_data": false, 00:14:37.036 "copy": true, 00:14:37.036 "nvme_iov_md": false 00:14:37.036 }, 00:14:37.036 "memory_domains": [ 00:14:37.036 { 00:14:37.036 "dma_device_id": "system", 00:14:37.036 "dma_device_type": 1 00:14:37.036 }, 00:14:37.036 { 00:14:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.036 "dma_device_type": 2 00:14:37.036 } 00:14:37.036 ], 00:14:37.036 "driver_specific": {} 
00:14:37.036 } 00:14:37.036 ] 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.036 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.036 "name": "Existed_Raid", 00:14:37.036 "uuid": "0a70118c-ecae-408e-a470-42e586375c08", 00:14:37.036 "strip_size_kb": 0, 00:14:37.036 "state": "configuring", 00:14:37.036 "raid_level": "raid1", 00:14:37.036 "superblock": true, 00:14:37.036 "num_base_bdevs": 4, 00:14:37.036 "num_base_bdevs_discovered": 1, 00:14:37.036 "num_base_bdevs_operational": 4, 00:14:37.036 "base_bdevs_list": [ 00:14:37.036 { 00:14:37.036 "name": "BaseBdev1", 00:14:37.036 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:37.036 "is_configured": true, 00:14:37.036 "data_offset": 2048, 00:14:37.036 "data_size": 63488 00:14:37.036 }, 00:14:37.036 { 00:14:37.036 "name": "BaseBdev2", 00:14:37.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.036 "is_configured": false, 00:14:37.036 "data_offset": 0, 00:14:37.036 "data_size": 0 00:14:37.036 }, 00:14:37.037 { 00:14:37.037 "name": "BaseBdev3", 00:14:37.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.037 "is_configured": false, 00:14:37.037 "data_offset": 0, 00:14:37.037 "data_size": 0 00:14:37.037 }, 00:14:37.037 { 00:14:37.037 "name": "BaseBdev4", 00:14:37.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.037 "is_configured": false, 00:14:37.037 "data_offset": 0, 00:14:37.037 "data_size": 0 00:14:37.037 } 00:14:37.037 ] 00:14:37.037 }' 00:14:37.037 22:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.037 22:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.295 [2024-09-27 22:31:33.102297] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.295 [2024-09-27 22:31:33.102372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.295 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.295 [2024-09-27 22:31:33.114370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.295 [2024-09-27 22:31:33.116733] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.296 [2024-09-27 22:31:33.116792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.296 [2024-09-27 22:31:33.116803] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.296 [2024-09-27 22:31:33.116819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.296 [2024-09-27 22:31:33.116828] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:37.296 [2024-09-27 22:31:33.116841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:37.296 22:31:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.555 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.555 "name": 
"Existed_Raid", 00:14:37.555 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:37.555 "strip_size_kb": 0, 00:14:37.555 "state": "configuring", 00:14:37.555 "raid_level": "raid1", 00:14:37.555 "superblock": true, 00:14:37.555 "num_base_bdevs": 4, 00:14:37.555 "num_base_bdevs_discovered": 1, 00:14:37.555 "num_base_bdevs_operational": 4, 00:14:37.555 "base_bdevs_list": [ 00:14:37.555 { 00:14:37.555 "name": "BaseBdev1", 00:14:37.555 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 2048, 00:14:37.555 "data_size": 63488 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev2", 00:14:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.555 "is_configured": false, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 0 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev3", 00:14:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.555 "is_configured": false, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 0 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev4", 00:14:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.555 "is_configured": false, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 0 00:14:37.555 } 00:14:37.555 ] 00:14:37.555 }' 00:14:37.555 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.555 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.814 [2024-09-27 22:31:33.615560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.814 
BaseBdev2 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.814 [ 00:14:37.814 { 00:14:37.814 "name": "BaseBdev2", 00:14:37.814 "aliases": [ 00:14:37.814 "e5c92294-958b-45fa-aabc-610183b54771" 00:14:37.814 ], 00:14:37.814 "product_name": "Malloc disk", 00:14:37.814 "block_size": 512, 00:14:37.814 "num_blocks": 65536, 00:14:37.814 "uuid": "e5c92294-958b-45fa-aabc-610183b54771", 00:14:37.814 "assigned_rate_limits": { 
00:14:37.814 "rw_ios_per_sec": 0, 00:14:37.814 "rw_mbytes_per_sec": 0, 00:14:37.814 "r_mbytes_per_sec": 0, 00:14:37.814 "w_mbytes_per_sec": 0 00:14:37.814 }, 00:14:37.814 "claimed": true, 00:14:37.814 "claim_type": "exclusive_write", 00:14:37.814 "zoned": false, 00:14:37.814 "supported_io_types": { 00:14:37.814 "read": true, 00:14:37.814 "write": true, 00:14:37.814 "unmap": true, 00:14:37.814 "flush": true, 00:14:37.814 "reset": true, 00:14:37.814 "nvme_admin": false, 00:14:37.814 "nvme_io": false, 00:14:37.814 "nvme_io_md": false, 00:14:37.814 "write_zeroes": true, 00:14:37.814 "zcopy": true, 00:14:37.814 "get_zone_info": false, 00:14:37.814 "zone_management": false, 00:14:37.814 "zone_append": false, 00:14:37.814 "compare": false, 00:14:37.814 "compare_and_write": false, 00:14:37.814 "abort": true, 00:14:37.814 "seek_hole": false, 00:14:37.814 "seek_data": false, 00:14:37.814 "copy": true, 00:14:37.814 "nvme_iov_md": false 00:14:37.814 }, 00:14:37.814 "memory_domains": [ 00:14:37.814 { 00:14:37.814 "dma_device_id": "system", 00:14:37.814 "dma_device_type": 1 00:14:37.814 }, 00:14:37.814 { 00:14:37.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.814 "dma_device_type": 2 00:14:37.814 } 00:14:37.814 ], 00:14:37.814 "driver_specific": {} 00:14:37.814 } 00:14:37.814 ] 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.814 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.072 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.072 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.072 "name": "Existed_Raid", 00:14:38.072 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:38.072 "strip_size_kb": 0, 00:14:38.072 "state": "configuring", 00:14:38.072 "raid_level": "raid1", 00:14:38.072 "superblock": true, 00:14:38.072 "num_base_bdevs": 4, 00:14:38.072 "num_base_bdevs_discovered": 2, 00:14:38.072 "num_base_bdevs_operational": 4, 00:14:38.072 
"base_bdevs_list": [ 00:14:38.072 { 00:14:38.072 "name": "BaseBdev1", 00:14:38.072 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:38.072 "is_configured": true, 00:14:38.072 "data_offset": 2048, 00:14:38.072 "data_size": 63488 00:14:38.072 }, 00:14:38.072 { 00:14:38.072 "name": "BaseBdev2", 00:14:38.072 "uuid": "e5c92294-958b-45fa-aabc-610183b54771", 00:14:38.072 "is_configured": true, 00:14:38.072 "data_offset": 2048, 00:14:38.072 "data_size": 63488 00:14:38.072 }, 00:14:38.072 { 00:14:38.072 "name": "BaseBdev3", 00:14:38.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.072 "is_configured": false, 00:14:38.072 "data_offset": 0, 00:14:38.072 "data_size": 0 00:14:38.072 }, 00:14:38.072 { 00:14:38.072 "name": "BaseBdev4", 00:14:38.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.072 "is_configured": false, 00:14:38.072 "data_offset": 0, 00:14:38.072 "data_size": 0 00:14:38.072 } 00:14:38.072 ] 00:14:38.072 }' 00:14:38.072 22:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.072 22:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.331 [2024-09-27 22:31:34.180946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.331 BaseBdev3 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.331 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.331 [ 00:14:38.331 { 00:14:38.331 "name": "BaseBdev3", 00:14:38.331 "aliases": [ 00:14:38.590 "d48c76a3-3239-4ae5-9231-8d4be7e6f01e" 00:14:38.590 ], 00:14:38.590 "product_name": "Malloc disk", 00:14:38.590 "block_size": 512, 00:14:38.590 "num_blocks": 65536, 00:14:38.590 "uuid": "d48c76a3-3239-4ae5-9231-8d4be7e6f01e", 00:14:38.590 "assigned_rate_limits": { 00:14:38.590 "rw_ios_per_sec": 0, 00:14:38.590 "rw_mbytes_per_sec": 0, 00:14:38.590 "r_mbytes_per_sec": 0, 00:14:38.590 "w_mbytes_per_sec": 0 00:14:38.590 }, 00:14:38.590 "claimed": true, 00:14:38.590 "claim_type": "exclusive_write", 00:14:38.590 "zoned": false, 00:14:38.590 "supported_io_types": { 00:14:38.590 "read": true, 00:14:38.590 
"write": true, 00:14:38.590 "unmap": true, 00:14:38.590 "flush": true, 00:14:38.590 "reset": true, 00:14:38.590 "nvme_admin": false, 00:14:38.590 "nvme_io": false, 00:14:38.590 "nvme_io_md": false, 00:14:38.590 "write_zeroes": true, 00:14:38.590 "zcopy": true, 00:14:38.590 "get_zone_info": false, 00:14:38.590 "zone_management": false, 00:14:38.590 "zone_append": false, 00:14:38.590 "compare": false, 00:14:38.590 "compare_and_write": false, 00:14:38.590 "abort": true, 00:14:38.590 "seek_hole": false, 00:14:38.590 "seek_data": false, 00:14:38.590 "copy": true, 00:14:38.590 "nvme_iov_md": false 00:14:38.590 }, 00:14:38.590 "memory_domains": [ 00:14:38.590 { 00:14:38.590 "dma_device_id": "system", 00:14:38.590 "dma_device_type": 1 00:14:38.590 }, 00:14:38.591 { 00:14:38.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.591 "dma_device_type": 2 00:14:38.591 } 00:14:38.591 ], 00:14:38.591 "driver_specific": {} 00:14:38.591 } 00:14:38.591 ] 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.591 "name": "Existed_Raid", 00:14:38.591 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:38.591 "strip_size_kb": 0, 00:14:38.591 "state": "configuring", 00:14:38.591 "raid_level": "raid1", 00:14:38.591 "superblock": true, 00:14:38.591 "num_base_bdevs": 4, 00:14:38.591 "num_base_bdevs_discovered": 3, 00:14:38.591 "num_base_bdevs_operational": 4, 00:14:38.591 "base_bdevs_list": [ 00:14:38.591 { 00:14:38.591 "name": "BaseBdev1", 00:14:38.591 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:38.591 "is_configured": true, 00:14:38.591 "data_offset": 2048, 00:14:38.591 "data_size": 63488 00:14:38.591 }, 00:14:38.591 { 00:14:38.591 "name": "BaseBdev2", 00:14:38.591 "uuid": 
"e5c92294-958b-45fa-aabc-610183b54771", 00:14:38.591 "is_configured": true, 00:14:38.591 "data_offset": 2048, 00:14:38.591 "data_size": 63488 00:14:38.591 }, 00:14:38.591 { 00:14:38.591 "name": "BaseBdev3", 00:14:38.591 "uuid": "d48c76a3-3239-4ae5-9231-8d4be7e6f01e", 00:14:38.591 "is_configured": true, 00:14:38.591 "data_offset": 2048, 00:14:38.591 "data_size": 63488 00:14:38.591 }, 00:14:38.591 { 00:14:38.591 "name": "BaseBdev4", 00:14:38.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.591 "is_configured": false, 00:14:38.591 "data_offset": 0, 00:14:38.591 "data_size": 0 00:14:38.591 } 00:14:38.591 ] 00:14:38.591 }' 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.591 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.849 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:38.849 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.849 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 [2024-09-27 22:31:34.750218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.108 [2024-09-27 22:31:34.750502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:39.108 [2024-09-27 22:31:34.750523] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.108 [2024-09-27 22:31:34.750825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:39.108 BaseBdev4 00:14:39.108 [2024-09-27 22:31:34.751029] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:39.108 [2024-09-27 22:31:34.751047] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:39.108 [2024-09-27 22:31:34.751197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 [ 00:14:39.108 { 00:14:39.108 "name": "BaseBdev4", 00:14:39.108 "aliases": [ 00:14:39.108 "0e034a55-a0ae-4957-9f6d-3f0e798e57b5" 00:14:39.108 ], 00:14:39.108 "product_name": "Malloc disk", 00:14:39.108 "block_size": 512, 00:14:39.108 
"num_blocks": 65536, 00:14:39.108 "uuid": "0e034a55-a0ae-4957-9f6d-3f0e798e57b5", 00:14:39.108 "assigned_rate_limits": { 00:14:39.108 "rw_ios_per_sec": 0, 00:14:39.108 "rw_mbytes_per_sec": 0, 00:14:39.108 "r_mbytes_per_sec": 0, 00:14:39.108 "w_mbytes_per_sec": 0 00:14:39.108 }, 00:14:39.108 "claimed": true, 00:14:39.108 "claim_type": "exclusive_write", 00:14:39.108 "zoned": false, 00:14:39.108 "supported_io_types": { 00:14:39.108 "read": true, 00:14:39.108 "write": true, 00:14:39.108 "unmap": true, 00:14:39.108 "flush": true, 00:14:39.108 "reset": true, 00:14:39.108 "nvme_admin": false, 00:14:39.108 "nvme_io": false, 00:14:39.108 "nvme_io_md": false, 00:14:39.108 "write_zeroes": true, 00:14:39.108 "zcopy": true, 00:14:39.108 "get_zone_info": false, 00:14:39.108 "zone_management": false, 00:14:39.108 "zone_append": false, 00:14:39.108 "compare": false, 00:14:39.108 "compare_and_write": false, 00:14:39.108 "abort": true, 00:14:39.108 "seek_hole": false, 00:14:39.108 "seek_data": false, 00:14:39.108 "copy": true, 00:14:39.108 "nvme_iov_md": false 00:14:39.108 }, 00:14:39.108 "memory_domains": [ 00:14:39.108 { 00:14:39.108 "dma_device_id": "system", 00:14:39.108 "dma_device_type": 1 00:14:39.108 }, 00:14:39.108 { 00:14:39.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.108 "dma_device_type": 2 00:14:39.108 } 00:14:39.108 ], 00:14:39.108 "driver_specific": {} 00:14:39.108 } 00:14:39.108 ] 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.108 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.109 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.109 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.109 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.109 "name": "Existed_Raid", 00:14:39.109 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:39.109 "strip_size_kb": 0, 00:14:39.109 "state": "online", 00:14:39.109 "raid_level": "raid1", 00:14:39.109 "superblock": true, 00:14:39.109 "num_base_bdevs": 4, 
00:14:39.109 "num_base_bdevs_discovered": 4, 00:14:39.109 "num_base_bdevs_operational": 4, 00:14:39.109 "base_bdevs_list": [ 00:14:39.109 { 00:14:39.109 "name": "BaseBdev1", 00:14:39.109 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:39.109 "is_configured": true, 00:14:39.109 "data_offset": 2048, 00:14:39.109 "data_size": 63488 00:14:39.109 }, 00:14:39.109 { 00:14:39.109 "name": "BaseBdev2", 00:14:39.109 "uuid": "e5c92294-958b-45fa-aabc-610183b54771", 00:14:39.109 "is_configured": true, 00:14:39.109 "data_offset": 2048, 00:14:39.109 "data_size": 63488 00:14:39.109 }, 00:14:39.109 { 00:14:39.109 "name": "BaseBdev3", 00:14:39.109 "uuid": "d48c76a3-3239-4ae5-9231-8d4be7e6f01e", 00:14:39.109 "is_configured": true, 00:14:39.109 "data_offset": 2048, 00:14:39.109 "data_size": 63488 00:14:39.109 }, 00:14:39.109 { 00:14:39.109 "name": "BaseBdev4", 00:14:39.109 "uuid": "0e034a55-a0ae-4957-9f6d-3f0e798e57b5", 00:14:39.109 "is_configured": true, 00:14:39.109 "data_offset": 2048, 00:14:39.109 "data_size": 63488 00:14:39.109 } 00:14:39.109 ] 00:14:39.109 }' 00:14:39.109 22:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.109 22:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.676 
22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.676 [2024-09-27 22:31:35.277881] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.676 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.676 "name": "Existed_Raid", 00:14:39.676 "aliases": [ 00:14:39.676 "ee60e8dd-a451-4652-8615-d3330f4827ee" 00:14:39.676 ], 00:14:39.676 "product_name": "Raid Volume", 00:14:39.676 "block_size": 512, 00:14:39.676 "num_blocks": 63488, 00:14:39.676 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:39.676 "assigned_rate_limits": { 00:14:39.676 "rw_ios_per_sec": 0, 00:14:39.676 "rw_mbytes_per_sec": 0, 00:14:39.676 "r_mbytes_per_sec": 0, 00:14:39.676 "w_mbytes_per_sec": 0 00:14:39.676 }, 00:14:39.676 "claimed": false, 00:14:39.676 "zoned": false, 00:14:39.676 "supported_io_types": { 00:14:39.676 "read": true, 00:14:39.676 "write": true, 00:14:39.676 "unmap": false, 00:14:39.676 "flush": false, 00:14:39.676 "reset": true, 00:14:39.676 "nvme_admin": false, 00:14:39.676 "nvme_io": false, 00:14:39.676 "nvme_io_md": false, 00:14:39.676 "write_zeroes": true, 00:14:39.676 "zcopy": false, 00:14:39.676 "get_zone_info": false, 00:14:39.676 "zone_management": false, 00:14:39.676 "zone_append": false, 00:14:39.676 "compare": false, 00:14:39.676 "compare_and_write": false, 00:14:39.676 "abort": false, 00:14:39.676 "seek_hole": false, 00:14:39.676 "seek_data": false, 00:14:39.676 "copy": false, 00:14:39.676 
"nvme_iov_md": false 00:14:39.676 }, 00:14:39.676 "memory_domains": [ 00:14:39.676 { 00:14:39.676 "dma_device_id": "system", 00:14:39.676 "dma_device_type": 1 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.676 "dma_device_type": 2 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "system", 00:14:39.676 "dma_device_type": 1 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.676 "dma_device_type": 2 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "system", 00:14:39.676 "dma_device_type": 1 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.676 "dma_device_type": 2 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "system", 00:14:39.676 "dma_device_type": 1 00:14:39.676 }, 00:14:39.676 { 00:14:39.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.676 "dma_device_type": 2 00:14:39.676 } 00:14:39.676 ], 00:14:39.676 "driver_specific": { 00:14:39.676 "raid": { 00:14:39.676 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:39.676 "strip_size_kb": 0, 00:14:39.676 "state": "online", 00:14:39.676 "raid_level": "raid1", 00:14:39.676 "superblock": true, 00:14:39.676 "num_base_bdevs": 4, 00:14:39.676 "num_base_bdevs_discovered": 4, 00:14:39.676 "num_base_bdevs_operational": 4, 00:14:39.676 "base_bdevs_list": [ 00:14:39.676 { 00:14:39.676 "name": "BaseBdev1", 00:14:39.676 "uuid": "a8afcd06-8b81-4557-80cc-a27ebdf863c5", 00:14:39.676 "is_configured": true, 00:14:39.676 "data_offset": 2048, 00:14:39.676 "data_size": 63488 00:14:39.676 }, 00:14:39.677 { 00:14:39.677 "name": "BaseBdev2", 00:14:39.677 "uuid": "e5c92294-958b-45fa-aabc-610183b54771", 00:14:39.677 "is_configured": true, 00:14:39.677 "data_offset": 2048, 00:14:39.677 "data_size": 63488 00:14:39.677 }, 00:14:39.677 { 00:14:39.677 "name": "BaseBdev3", 00:14:39.677 "uuid": "d48c76a3-3239-4ae5-9231-8d4be7e6f01e", 00:14:39.677 "is_configured": true, 
00:14:39.677 "data_offset": 2048, 00:14:39.677 "data_size": 63488 00:14:39.677 }, 00:14:39.677 { 00:14:39.677 "name": "BaseBdev4", 00:14:39.677 "uuid": "0e034a55-a0ae-4957-9f6d-3f0e798e57b5", 00:14:39.677 "is_configured": true, 00:14:39.677 "data_offset": 2048, 00:14:39.677 "data_size": 63488 00:14:39.677 } 00:14:39.677 ] 00:14:39.677 } 00:14:39.677 } 00:14:39.677 }' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:39.677 BaseBdev2 00:14:39.677 BaseBdev3 00:14:39.677 BaseBdev4' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.677 22:31:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.677 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.935 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.936 [2024-09-27 22:31:35.621181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:39.936 22:31:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.936 "name": "Existed_Raid", 00:14:39.936 "uuid": "ee60e8dd-a451-4652-8615-d3330f4827ee", 00:14:39.936 "strip_size_kb": 0, 00:14:39.936 
"state": "online", 00:14:39.936 "raid_level": "raid1", 00:14:39.936 "superblock": true, 00:14:39.936 "num_base_bdevs": 4, 00:14:39.936 "num_base_bdevs_discovered": 3, 00:14:39.936 "num_base_bdevs_operational": 3, 00:14:39.936 "base_bdevs_list": [ 00:14:39.936 { 00:14:39.936 "name": null, 00:14:39.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.936 "is_configured": false, 00:14:39.936 "data_offset": 0, 00:14:39.936 "data_size": 63488 00:14:39.936 }, 00:14:39.936 { 00:14:39.936 "name": "BaseBdev2", 00:14:39.936 "uuid": "e5c92294-958b-45fa-aabc-610183b54771", 00:14:39.936 "is_configured": true, 00:14:39.936 "data_offset": 2048, 00:14:39.936 "data_size": 63488 00:14:39.936 }, 00:14:39.936 { 00:14:39.936 "name": "BaseBdev3", 00:14:39.936 "uuid": "d48c76a3-3239-4ae5-9231-8d4be7e6f01e", 00:14:39.936 "is_configured": true, 00:14:39.936 "data_offset": 2048, 00:14:39.936 "data_size": 63488 00:14:39.936 }, 00:14:39.936 { 00:14:39.936 "name": "BaseBdev4", 00:14:39.936 "uuid": "0e034a55-a0ae-4957-9f6d-3f0e798e57b5", 00:14:39.936 "is_configured": true, 00:14:39.936 "data_offset": 2048, 00:14:39.936 "data_size": 63488 00:14:39.936 } 00:14:39.936 ] 00:14:39.936 }' 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.936 22:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.501 22:31:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.501 [2024-09-27 22:31:36.234536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.501 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.760 [2024-09-27 22:31:36.400662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.760 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.760 [2024-09-27 22:31:36.562771] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:40.760 [2024-09-27 22:31:36.562884] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.018 [2024-09-27 22:31:36.668879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.018 [2024-09-27 22:31:36.668951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.018 [2024-09-27 22:31:36.668967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:41.018 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.018 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:41.018 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.019 BaseBdev2 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:41.019 [ 00:14:41.019 { 00:14:41.019 "name": "BaseBdev2", 00:14:41.019 "aliases": [ 00:14:41.019 "111ec8e0-ed2c-4610-b995-9a9fc8a32f53" 00:14:41.019 ], 00:14:41.019 "product_name": "Malloc disk", 00:14:41.019 "block_size": 512, 00:14:41.019 "num_blocks": 65536, 00:14:41.019 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:41.019 "assigned_rate_limits": { 00:14:41.019 "rw_ios_per_sec": 0, 00:14:41.019 "rw_mbytes_per_sec": 0, 00:14:41.019 "r_mbytes_per_sec": 0, 00:14:41.019 "w_mbytes_per_sec": 0 00:14:41.019 }, 00:14:41.019 "claimed": false, 00:14:41.019 "zoned": false, 00:14:41.019 "supported_io_types": { 00:14:41.019 "read": true, 00:14:41.019 "write": true, 00:14:41.019 "unmap": true, 00:14:41.019 "flush": true, 00:14:41.019 "reset": true, 00:14:41.019 "nvme_admin": false, 00:14:41.019 "nvme_io": false, 00:14:41.019 "nvme_io_md": false, 00:14:41.019 "write_zeroes": true, 00:14:41.019 "zcopy": true, 00:14:41.019 "get_zone_info": false, 00:14:41.019 "zone_management": false, 00:14:41.019 "zone_append": false, 00:14:41.019 "compare": false, 00:14:41.019 "compare_and_write": false, 00:14:41.019 "abort": true, 00:14:41.019 "seek_hole": false, 00:14:41.019 "seek_data": false, 00:14:41.019 "copy": true, 00:14:41.019 "nvme_iov_md": false 00:14:41.019 }, 00:14:41.019 "memory_domains": [ 00:14:41.019 { 00:14:41.019 "dma_device_id": "system", 00:14:41.019 "dma_device_type": 1 00:14:41.019 }, 00:14:41.019 { 00:14:41.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.019 "dma_device_type": 2 00:14:41.019 } 00:14:41.019 ], 00:14:41.019 "driver_specific": {} 00:14:41.019 } 00:14:41.019 ] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:41.019 22:31:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.019 BaseBdev3 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:41.019 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.019 22:31:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 [ 00:14:41.278 { 00:14:41.278 "name": "BaseBdev3", 00:14:41.278 "aliases": [ 00:14:41.278 "65c0fa0a-c87f-4d84-95f1-3425d73ca50f" 00:14:41.278 ], 00:14:41.278 "product_name": "Malloc disk", 00:14:41.278 "block_size": 512, 00:14:41.278 "num_blocks": 65536, 00:14:41.278 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:41.278 "assigned_rate_limits": { 00:14:41.278 "rw_ios_per_sec": 0, 00:14:41.278 "rw_mbytes_per_sec": 0, 00:14:41.278 "r_mbytes_per_sec": 0, 00:14:41.278 "w_mbytes_per_sec": 0 00:14:41.278 }, 00:14:41.278 "claimed": false, 00:14:41.278 "zoned": false, 00:14:41.278 "supported_io_types": { 00:14:41.278 "read": true, 00:14:41.278 "write": true, 00:14:41.278 "unmap": true, 00:14:41.278 "flush": true, 00:14:41.278 "reset": true, 00:14:41.278 "nvme_admin": false, 00:14:41.278 "nvme_io": false, 00:14:41.278 "nvme_io_md": false, 00:14:41.278 "write_zeroes": true, 00:14:41.278 "zcopy": true, 00:14:41.278 "get_zone_info": false, 00:14:41.278 "zone_management": false, 00:14:41.278 "zone_append": false, 00:14:41.278 "compare": false, 00:14:41.278 "compare_and_write": false, 00:14:41.278 "abort": true, 00:14:41.278 "seek_hole": false, 00:14:41.278 "seek_data": false, 00:14:41.278 "copy": true, 00:14:41.278 "nvme_iov_md": false 00:14:41.278 }, 00:14:41.278 "memory_domains": [ 00:14:41.278 { 00:14:41.278 "dma_device_id": "system", 00:14:41.278 "dma_device_type": 1 00:14:41.278 }, 00:14:41.278 { 00:14:41.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.278 "dma_device_type": 2 00:14:41.278 } 00:14:41.278 ], 00:14:41.278 "driver_specific": {} 00:14:41.278 } 00:14:41.278 ] 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 BaseBdev4 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 22:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 [ 00:14:41.278 { 00:14:41.278 "name": "BaseBdev4", 00:14:41.278 "aliases": [ 00:14:41.278 "52dd8768-be3c-4491-9272-a96093b8a388" 00:14:41.278 ], 00:14:41.278 "product_name": "Malloc disk", 00:14:41.278 "block_size": 512, 00:14:41.278 "num_blocks": 65536, 00:14:41.278 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:41.278 "assigned_rate_limits": { 00:14:41.278 "rw_ios_per_sec": 0, 00:14:41.278 "rw_mbytes_per_sec": 0, 00:14:41.278 "r_mbytes_per_sec": 0, 00:14:41.278 "w_mbytes_per_sec": 0 00:14:41.278 }, 00:14:41.278 "claimed": false, 00:14:41.278 "zoned": false, 00:14:41.278 "supported_io_types": { 00:14:41.278 "read": true, 00:14:41.278 "write": true, 00:14:41.278 "unmap": true, 00:14:41.278 "flush": true, 00:14:41.278 "reset": true, 00:14:41.278 "nvme_admin": false, 00:14:41.278 "nvme_io": false, 00:14:41.278 "nvme_io_md": false, 00:14:41.278 "write_zeroes": true, 00:14:41.278 "zcopy": true, 00:14:41.278 "get_zone_info": false, 00:14:41.278 "zone_management": false, 00:14:41.278 "zone_append": false, 00:14:41.278 "compare": false, 00:14:41.278 "compare_and_write": false, 00:14:41.278 "abort": true, 00:14:41.278 "seek_hole": false, 00:14:41.278 "seek_data": false, 00:14:41.278 "copy": true, 00:14:41.278 "nvme_iov_md": false 00:14:41.278 }, 00:14:41.278 "memory_domains": [ 00:14:41.278 { 00:14:41.278 "dma_device_id": "system", 00:14:41.278 "dma_device_type": 1 00:14:41.278 }, 00:14:41.278 { 00:14:41.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.278 "dma_device_type": 2 00:14:41.278 } 00:14:41.278 ], 00:14:41.278 "driver_specific": {} 00:14:41.278 } 00:14:41.278 ] 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.278 [2024-09-27 22:31:37.030472] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.278 [2024-09-27 22:31:37.030540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.278 [2024-09-27 22:31:37.030568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.278 [2024-09-27 22:31:37.032925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.278 [2024-09-27 22:31:37.033018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:41.278 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.279 "name": "Existed_Raid", 00:14:41.279 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:41.279 "strip_size_kb": 0, 00:14:41.279 "state": "configuring", 00:14:41.279 "raid_level": "raid1", 00:14:41.279 "superblock": true, 00:14:41.279 "num_base_bdevs": 4, 00:14:41.279 "num_base_bdevs_discovered": 3, 00:14:41.279 "num_base_bdevs_operational": 4, 00:14:41.279 "base_bdevs_list": [ 00:14:41.279 { 00:14:41.279 "name": "BaseBdev1", 00:14:41.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.279 "is_configured": false, 00:14:41.279 "data_offset": 0, 00:14:41.279 "data_size": 0 00:14:41.279 }, 00:14:41.279 { 00:14:41.279 "name": "BaseBdev2", 00:14:41.279 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 
00:14:41.279 "is_configured": true, 00:14:41.279 "data_offset": 2048, 00:14:41.279 "data_size": 63488 00:14:41.279 }, 00:14:41.279 { 00:14:41.279 "name": "BaseBdev3", 00:14:41.279 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:41.279 "is_configured": true, 00:14:41.279 "data_offset": 2048, 00:14:41.279 "data_size": 63488 00:14:41.279 }, 00:14:41.279 { 00:14:41.279 "name": "BaseBdev4", 00:14:41.279 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:41.279 "is_configured": true, 00:14:41.279 "data_offset": 2048, 00:14:41.279 "data_size": 63488 00:14:41.279 } 00:14:41.279 ] 00:14:41.279 }' 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.279 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 [2024-09-27 22:31:37.477804] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.844 "name": "Existed_Raid", 00:14:41.844 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:41.844 "strip_size_kb": 0, 00:14:41.844 "state": "configuring", 00:14:41.844 "raid_level": "raid1", 00:14:41.844 "superblock": true, 00:14:41.844 "num_base_bdevs": 4, 00:14:41.844 "num_base_bdevs_discovered": 2, 00:14:41.844 "num_base_bdevs_operational": 4, 00:14:41.844 "base_bdevs_list": [ 00:14:41.844 { 00:14:41.844 "name": "BaseBdev1", 00:14:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.844 "is_configured": false, 00:14:41.844 "data_offset": 0, 00:14:41.844 "data_size": 0 00:14:41.844 }, 00:14:41.844 { 00:14:41.844 "name": null, 00:14:41.844 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:41.844 
"is_configured": false, 00:14:41.844 "data_offset": 0, 00:14:41.844 "data_size": 63488 00:14:41.844 }, 00:14:41.844 { 00:14:41.844 "name": "BaseBdev3", 00:14:41.844 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:41.844 "is_configured": true, 00:14:41.844 "data_offset": 2048, 00:14:41.844 "data_size": 63488 00:14:41.844 }, 00:14:41.844 { 00:14:41.844 "name": "BaseBdev4", 00:14:41.844 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:41.844 "is_configured": true, 00:14:41.844 "data_offset": 2048, 00:14:41.844 "data_size": 63488 00:14:41.844 } 00:14:41.844 ] 00:14:41.844 }' 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.844 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.101 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.101 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.101 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:42.101 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.359 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:42.359 22:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.359 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.359 22:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 [2024-09-27 22:31:38.040177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.359 BaseBdev1 
00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.359 [ 00:14:42.359 { 00:14:42.359 "name": "BaseBdev1", 00:14:42.359 "aliases": [ 00:14:42.359 "a1c31015-e870-4f7e-82ac-0870e4fb5fd3" 00:14:42.359 ], 00:14:42.359 "product_name": "Malloc disk", 00:14:42.359 "block_size": 512, 00:14:42.359 "num_blocks": 65536, 00:14:42.359 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:42.359 "assigned_rate_limits": { 00:14:42.359 
"rw_ios_per_sec": 0, 00:14:42.359 "rw_mbytes_per_sec": 0, 00:14:42.359 "r_mbytes_per_sec": 0, 00:14:42.359 "w_mbytes_per_sec": 0 00:14:42.359 }, 00:14:42.359 "claimed": true, 00:14:42.359 "claim_type": "exclusive_write", 00:14:42.359 "zoned": false, 00:14:42.359 "supported_io_types": { 00:14:42.359 "read": true, 00:14:42.359 "write": true, 00:14:42.359 "unmap": true, 00:14:42.359 "flush": true, 00:14:42.359 "reset": true, 00:14:42.359 "nvme_admin": false, 00:14:42.359 "nvme_io": false, 00:14:42.359 "nvme_io_md": false, 00:14:42.359 "write_zeroes": true, 00:14:42.359 "zcopy": true, 00:14:42.359 "get_zone_info": false, 00:14:42.359 "zone_management": false, 00:14:42.359 "zone_append": false, 00:14:42.359 "compare": false, 00:14:42.359 "compare_and_write": false, 00:14:42.359 "abort": true, 00:14:42.359 "seek_hole": false, 00:14:42.359 "seek_data": false, 00:14:42.359 "copy": true, 00:14:42.359 "nvme_iov_md": false 00:14:42.359 }, 00:14:42.359 "memory_domains": [ 00:14:42.359 { 00:14:42.359 "dma_device_id": "system", 00:14:42.359 "dma_device_type": 1 00:14:42.359 }, 00:14:42.359 { 00:14:42.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.359 "dma_device_type": 2 00:14:42.359 } 00:14:42.359 ], 00:14:42.359 "driver_specific": {} 00:14:42.359 } 00:14:42.359 ] 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.359 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.360 "name": "Existed_Raid", 00:14:42.360 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:42.360 "strip_size_kb": 0, 00:14:42.360 "state": "configuring", 00:14:42.360 "raid_level": "raid1", 00:14:42.360 "superblock": true, 00:14:42.360 "num_base_bdevs": 4, 00:14:42.360 "num_base_bdevs_discovered": 3, 00:14:42.360 "num_base_bdevs_operational": 4, 00:14:42.360 "base_bdevs_list": [ 00:14:42.360 { 00:14:42.360 "name": "BaseBdev1", 00:14:42.360 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:42.360 "is_configured": true, 00:14:42.360 "data_offset": 2048, 00:14:42.360 "data_size": 63488 
00:14:42.360 }, 00:14:42.360 { 00:14:42.360 "name": null, 00:14:42.360 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:42.360 "is_configured": false, 00:14:42.360 "data_offset": 0, 00:14:42.360 "data_size": 63488 00:14:42.360 }, 00:14:42.360 { 00:14:42.360 "name": "BaseBdev3", 00:14:42.360 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:42.360 "is_configured": true, 00:14:42.360 "data_offset": 2048, 00:14:42.360 "data_size": 63488 00:14:42.360 }, 00:14:42.360 { 00:14:42.360 "name": "BaseBdev4", 00:14:42.360 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:42.360 "is_configured": true, 00:14:42.360 "data_offset": 2048, 00:14:42.360 "data_size": 63488 00:14:42.360 } 00:14:42.360 ] 00:14:42.360 }' 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.360 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.927 
[2024-09-27 22:31:38.607676] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.927 22:31:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.927 "name": "Existed_Raid", 00:14:42.927 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:42.927 "strip_size_kb": 0, 00:14:42.927 "state": "configuring", 00:14:42.927 "raid_level": "raid1", 00:14:42.927 "superblock": true, 00:14:42.927 "num_base_bdevs": 4, 00:14:42.927 "num_base_bdevs_discovered": 2, 00:14:42.927 "num_base_bdevs_operational": 4, 00:14:42.927 "base_bdevs_list": [ 00:14:42.927 { 00:14:42.927 "name": "BaseBdev1", 00:14:42.927 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:42.927 "is_configured": true, 00:14:42.927 "data_offset": 2048, 00:14:42.927 "data_size": 63488 00:14:42.927 }, 00:14:42.927 { 00:14:42.927 "name": null, 00:14:42.927 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:42.927 "is_configured": false, 00:14:42.927 "data_offset": 0, 00:14:42.927 "data_size": 63488 00:14:42.927 }, 00:14:42.927 { 00:14:42.927 "name": null, 00:14:42.927 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:42.927 "is_configured": false, 00:14:42.927 "data_offset": 0, 00:14:42.927 "data_size": 63488 00:14:42.927 }, 00:14:42.927 { 00:14:42.927 "name": "BaseBdev4", 00:14:42.927 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:42.927 "is_configured": true, 00:14:42.927 "data_offset": 2048, 00:14:42.927 "data_size": 63488 00:14:42.927 } 00:14:42.927 ] 00:14:42.927 }' 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.927 22:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.185 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.186 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.186 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.186 22:31:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:43.186 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.444 [2024-09-27 22:31:39.091697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.444 "name": "Existed_Raid", 00:14:43.444 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:43.444 "strip_size_kb": 0, 00:14:43.444 "state": "configuring", 00:14:43.444 "raid_level": "raid1", 00:14:43.444 "superblock": true, 00:14:43.444 "num_base_bdevs": 4, 00:14:43.444 "num_base_bdevs_discovered": 3, 00:14:43.444 "num_base_bdevs_operational": 4, 00:14:43.444 "base_bdevs_list": [ 00:14:43.444 { 00:14:43.444 "name": "BaseBdev1", 00:14:43.444 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:43.444 "is_configured": true, 00:14:43.444 "data_offset": 2048, 00:14:43.444 "data_size": 63488 00:14:43.444 }, 00:14:43.444 { 00:14:43.444 "name": null, 00:14:43.444 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:43.444 "is_configured": false, 00:14:43.444 "data_offset": 0, 00:14:43.444 "data_size": 63488 00:14:43.444 }, 00:14:43.444 { 00:14:43.444 "name": "BaseBdev3", 00:14:43.444 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:43.444 "is_configured": true, 00:14:43.444 "data_offset": 2048, 00:14:43.444 "data_size": 63488 00:14:43.444 }, 00:14:43.444 { 00:14:43.444 "name": "BaseBdev4", 00:14:43.444 "uuid": 
"52dd8768-be3c-4491-9272-a96093b8a388", 00:14:43.444 "is_configured": true, 00:14:43.444 "data_offset": 2048, 00:14:43.444 "data_size": 63488 00:14:43.444 } 00:14:43.444 ] 00:14:43.444 }' 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.444 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.702 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.702 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.702 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.702 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:43.702 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.703 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:43.703 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.703 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.703 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.703 [2024-09-27 22:31:39.547710] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.961 "name": "Existed_Raid", 00:14:43.961 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:43.961 "strip_size_kb": 0, 00:14:43.961 "state": "configuring", 00:14:43.961 "raid_level": "raid1", 00:14:43.961 "superblock": true, 00:14:43.961 "num_base_bdevs": 4, 00:14:43.961 "num_base_bdevs_discovered": 2, 00:14:43.961 "num_base_bdevs_operational": 4, 00:14:43.961 "base_bdevs_list": [ 00:14:43.961 { 00:14:43.961 "name": null, 00:14:43.961 
"uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:43.961 "is_configured": false, 00:14:43.961 "data_offset": 0, 00:14:43.961 "data_size": 63488 00:14:43.961 }, 00:14:43.961 { 00:14:43.961 "name": null, 00:14:43.961 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:43.961 "is_configured": false, 00:14:43.961 "data_offset": 0, 00:14:43.961 "data_size": 63488 00:14:43.961 }, 00:14:43.961 { 00:14:43.961 "name": "BaseBdev3", 00:14:43.961 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:43.961 "is_configured": true, 00:14:43.961 "data_offset": 2048, 00:14:43.961 "data_size": 63488 00:14:43.961 }, 00:14:43.961 { 00:14:43.961 "name": "BaseBdev4", 00:14:43.961 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:43.961 "is_configured": true, 00:14:43.961 "data_offset": 2048, 00:14:43.961 "data_size": 63488 00:14:43.961 } 00:14:43.961 ] 00:14:43.961 }' 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.961 22:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.530 [2024-09-27 22:31:40.189792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.530 22:31:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.530 "name": "Existed_Raid", 00:14:44.530 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:44.530 "strip_size_kb": 0, 00:14:44.530 "state": "configuring", 00:14:44.530 "raid_level": "raid1", 00:14:44.530 "superblock": true, 00:14:44.530 "num_base_bdevs": 4, 00:14:44.530 "num_base_bdevs_discovered": 3, 00:14:44.530 "num_base_bdevs_operational": 4, 00:14:44.530 "base_bdevs_list": [ 00:14:44.530 { 00:14:44.530 "name": null, 00:14:44.530 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:44.530 "is_configured": false, 00:14:44.530 "data_offset": 0, 00:14:44.530 "data_size": 63488 00:14:44.530 }, 00:14:44.530 { 00:14:44.530 "name": "BaseBdev2", 00:14:44.530 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:44.530 "is_configured": true, 00:14:44.530 "data_offset": 2048, 00:14:44.530 "data_size": 63488 00:14:44.530 }, 00:14:44.530 { 00:14:44.530 "name": "BaseBdev3", 00:14:44.530 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:44.530 "is_configured": true, 00:14:44.530 "data_offset": 2048, 00:14:44.530 "data_size": 63488 00:14:44.530 }, 00:14:44.530 { 00:14:44.530 "name": "BaseBdev4", 00:14:44.530 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:44.530 "is_configured": true, 00:14:44.530 "data_offset": 2048, 00:14:44.530 "data_size": 63488 00:14:44.530 } 00:14:44.530 ] 00:14:44.530 }' 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.530 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.789 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.789 22:31:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.789 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a1c31015-e870-4f7e-82ac-0870e4fb5fd3 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 [2024-09-27 22:31:40.795328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:45.048 [2024-09-27 22:31:40.795625] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:45.048 [2024-09-27 22:31:40.795645] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.048 [2024-09-27 22:31:40.795939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:45.048 NewBaseBdev 00:14:45.048 [2024-09-27 22:31:40.796143] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:45.048 [2024-09-27 22:31:40.796155] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:45.048 [2024-09-27 22:31:40.796288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.048 22:31:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 [ 00:14:45.048 { 00:14:45.048 "name": "NewBaseBdev", 00:14:45.048 "aliases": [ 00:14:45.048 "a1c31015-e870-4f7e-82ac-0870e4fb5fd3" 00:14:45.048 ], 00:14:45.048 "product_name": "Malloc disk", 00:14:45.048 "block_size": 512, 00:14:45.048 "num_blocks": 65536, 00:14:45.048 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:45.048 "assigned_rate_limits": { 00:14:45.048 "rw_ios_per_sec": 0, 00:14:45.048 "rw_mbytes_per_sec": 0, 00:14:45.048 "r_mbytes_per_sec": 0, 00:14:45.048 "w_mbytes_per_sec": 0 00:14:45.048 }, 00:14:45.048 "claimed": true, 00:14:45.048 "claim_type": "exclusive_write", 00:14:45.048 "zoned": false, 00:14:45.048 "supported_io_types": { 00:14:45.048 "read": true, 00:14:45.048 "write": true, 00:14:45.048 "unmap": true, 00:14:45.048 "flush": true, 00:14:45.048 "reset": true, 00:14:45.048 "nvme_admin": false, 00:14:45.048 "nvme_io": false, 00:14:45.048 "nvme_io_md": false, 00:14:45.048 "write_zeroes": true, 00:14:45.048 "zcopy": true, 00:14:45.048 "get_zone_info": false, 00:14:45.048 "zone_management": false, 00:14:45.048 "zone_append": false, 00:14:45.048 "compare": false, 00:14:45.048 "compare_and_write": false, 00:14:45.048 "abort": true, 00:14:45.048 "seek_hole": false, 00:14:45.048 "seek_data": false, 00:14:45.048 "copy": true, 00:14:45.048 "nvme_iov_md": false 00:14:45.048 }, 00:14:45.048 "memory_domains": [ 00:14:45.048 { 00:14:45.048 "dma_device_id": "system", 00:14:45.048 "dma_device_type": 1 00:14:45.048 }, 00:14:45.048 { 00:14:45.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.048 "dma_device_type": 2 00:14:45.048 } 00:14:45.048 ], 00:14:45.048 "driver_specific": {} 00:14:45.048 } 00:14:45.048 ] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:45.048 22:31:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.048 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.048 "name": "Existed_Raid", 00:14:45.048 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:45.048 "strip_size_kb": 0, 00:14:45.048 
"state": "online", 00:14:45.048 "raid_level": "raid1", 00:14:45.048 "superblock": true, 00:14:45.048 "num_base_bdevs": 4, 00:14:45.048 "num_base_bdevs_discovered": 4, 00:14:45.048 "num_base_bdevs_operational": 4, 00:14:45.048 "base_bdevs_list": [ 00:14:45.048 { 00:14:45.048 "name": "NewBaseBdev", 00:14:45.048 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:45.048 "is_configured": true, 00:14:45.048 "data_offset": 2048, 00:14:45.048 "data_size": 63488 00:14:45.048 }, 00:14:45.048 { 00:14:45.048 "name": "BaseBdev2", 00:14:45.048 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:45.048 "is_configured": true, 00:14:45.048 "data_offset": 2048, 00:14:45.048 "data_size": 63488 00:14:45.048 }, 00:14:45.048 { 00:14:45.048 "name": "BaseBdev3", 00:14:45.048 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:45.048 "is_configured": true, 00:14:45.048 "data_offset": 2048, 00:14:45.049 "data_size": 63488 00:14:45.049 }, 00:14:45.049 { 00:14:45.049 "name": "BaseBdev4", 00:14:45.049 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:45.049 "is_configured": true, 00:14:45.049 "data_offset": 2048, 00:14:45.049 "data_size": 63488 00:14:45.049 } 00:14:45.049 ] 00:14:45.049 }' 00:14:45.049 22:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.049 22:31:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.615 
22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.615 [2024-09-27 22:31:41.304539] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.615 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.615 "name": "Existed_Raid", 00:14:45.615 "aliases": [ 00:14:45.615 "51cf71f0-53ac-48a9-aa11-05e408063625" 00:14:45.615 ], 00:14:45.615 "product_name": "Raid Volume", 00:14:45.615 "block_size": 512, 00:14:45.615 "num_blocks": 63488, 00:14:45.615 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:45.615 "assigned_rate_limits": { 00:14:45.615 "rw_ios_per_sec": 0, 00:14:45.615 "rw_mbytes_per_sec": 0, 00:14:45.615 "r_mbytes_per_sec": 0, 00:14:45.615 "w_mbytes_per_sec": 0 00:14:45.615 }, 00:14:45.615 "claimed": false, 00:14:45.615 "zoned": false, 00:14:45.615 "supported_io_types": { 00:14:45.615 "read": true, 00:14:45.615 "write": true, 00:14:45.615 "unmap": false, 00:14:45.615 "flush": false, 00:14:45.615 "reset": true, 00:14:45.615 "nvme_admin": false, 00:14:45.615 "nvme_io": false, 00:14:45.615 "nvme_io_md": false, 00:14:45.615 "write_zeroes": true, 00:14:45.615 "zcopy": false, 00:14:45.615 "get_zone_info": false, 00:14:45.615 "zone_management": false, 00:14:45.615 "zone_append": false, 00:14:45.615 "compare": false, 00:14:45.615 "compare_and_write": false, 00:14:45.615 
"abort": false, 00:14:45.615 "seek_hole": false, 00:14:45.615 "seek_data": false, 00:14:45.615 "copy": false, 00:14:45.615 "nvme_iov_md": false 00:14:45.615 }, 00:14:45.615 "memory_domains": [ 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.616 }, 00:14:45.616 { 00:14:45.616 "dma_device_id": "system", 00:14:45.616 "dma_device_type": 1 00:14:45.616 }, 00:14:45.616 { 00:14:45.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.616 "dma_device_type": 2 00:14:45.616 } 00:14:45.616 ], 00:14:45.616 "driver_specific": { 00:14:45.616 "raid": { 00:14:45.616 "uuid": "51cf71f0-53ac-48a9-aa11-05e408063625", 00:14:45.616 "strip_size_kb": 0, 00:14:45.616 "state": "online", 00:14:45.616 "raid_level": "raid1", 00:14:45.616 "superblock": true, 00:14:45.616 "num_base_bdevs": 4, 00:14:45.616 "num_base_bdevs_discovered": 4, 00:14:45.616 "num_base_bdevs_operational": 4, 00:14:45.616 "base_bdevs_list": [ 00:14:45.616 { 00:14:45.616 "name": "NewBaseBdev", 00:14:45.616 "uuid": "a1c31015-e870-4f7e-82ac-0870e4fb5fd3", 00:14:45.616 "is_configured": true, 00:14:45.616 "data_offset": 2048, 00:14:45.616 "data_size": 63488 00:14:45.616 }, 00:14:45.616 { 00:14:45.616 "name": "BaseBdev2", 00:14:45.616 "uuid": "111ec8e0-ed2c-4610-b995-9a9fc8a32f53", 00:14:45.616 "is_configured": true, 00:14:45.616 "data_offset": 2048, 00:14:45.616 "data_size": 63488 00:14:45.616 }, 00:14:45.616 { 
00:14:45.616 "name": "BaseBdev3", 00:14:45.616 "uuid": "65c0fa0a-c87f-4d84-95f1-3425d73ca50f", 00:14:45.616 "is_configured": true, 00:14:45.616 "data_offset": 2048, 00:14:45.616 "data_size": 63488 00:14:45.616 }, 00:14:45.616 { 00:14:45.616 "name": "BaseBdev4", 00:14:45.616 "uuid": "52dd8768-be3c-4491-9272-a96093b8a388", 00:14:45.616 "is_configured": true, 00:14:45.616 "data_offset": 2048, 00:14:45.616 "data_size": 63488 00:14:45.616 } 00:14:45.616 ] 00:14:45.616 } 00:14:45.616 } 00:14:45.616 }' 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:45.616 BaseBdev2 00:14:45.616 BaseBdev3 00:14:45.616 BaseBdev4' 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.616 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 [2024-09-27 22:31:41.655750] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.875 [2024-09-27 22:31:41.655801] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.875 [2024-09-27 22:31:41.655893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.875 [2024-09-27 22:31:41.656262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.875 [2024-09-27 22:31:41.656281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74656 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74656 ']' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74656 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74656 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74656' 00:14:45.875 killing process with pid 74656 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74656 00:14:45.875 22:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74656 00:14:45.875 [2024-09-27 22:31:41.710838] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.443 [2024-09-27 22:31:42.149349] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.341 22:31:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:48.341 00:14:48.341 real 0m13.402s 00:14:48.341 user 0m20.602s 00:14:48.341 sys 0m2.506s 00:14:48.341 22:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:14:48.341 22:31:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.341 ************************************ 00:14:48.342 END TEST raid_state_function_test_sb 00:14:48.342 ************************************ 00:14:48.600 22:31:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:48.600 22:31:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:48.600 22:31:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.600 22:31:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.600 ************************************ 00:14:48.600 START TEST raid_superblock_test 00:14:48.600 ************************************ 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:48.600 22:31:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75343 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75343 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75343 ']' 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.600 22:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.600 [2024-09-27 22:31:44.363887] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:48.600 [2024-09-27 22:31:44.364285] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75343 ] 00:14:48.858 [2024-09-27 22:31:44.552956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.139 [2024-09-27 22:31:44.795449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.433 [2024-09-27 22:31:45.042913] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.433 [2024-09-27 22:31:45.042975] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:49.691 
22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.691 malloc1 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.691 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 [2024-09-27 22:31:45.571606] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:49.951 [2024-09-27 22:31:45.571898] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.951 [2024-09-27 22:31:45.571967] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:49.951 [2024-09-27 22:31:45.572077] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.951 [2024-09-27 22:31:45.574805] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.951 [2024-09-27 22:31:45.575004] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:49.951 pt1 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 malloc2 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 [2024-09-27 22:31:45.639248] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.951 [2024-09-27 22:31:45.639465] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.951 [2024-09-27 22:31:45.639553] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:49.951 [2024-09-27 22:31:45.639679] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.951 [2024-09-27 22:31:45.642393] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.951 [2024-09-27 22:31:45.642437] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.951 
pt2 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 malloc3 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 [2024-09-27 22:31:45.702152] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:49.951 [2024-09-27 22:31:45.702353] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.951 [2024-09-27 22:31:45.702433] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:49.951 [2024-09-27 22:31:45.702559] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.951 [2024-09-27 22:31:45.705250] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.951 [2024-09-27 22:31:45.705408] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:49.951 pt3 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 malloc4 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 [2024-09-27 22:31:45.769494] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:49.951 [2024-09-27 22:31:45.769564] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.951 [2024-09-27 22:31:45.769604] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:49.951 [2024-09-27 22:31:45.769617] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.951 [2024-09-27 22:31:45.772185] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.951 [2024-09-27 22:31:45.772371] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:49.951 pt4 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.951 [2024-09-27 22:31:45.781564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:49.951 [2024-09-27 22:31:45.783894] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.951 [2024-09-27 22:31:45.783962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:49.951 [2024-09-27 22:31:45.784027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:49.951 [2024-09-27 22:31:45.784243] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:49.951 [2024-09-27 22:31:45.784256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.951 [2024-09-27 22:31:45.784599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:49.951 [2024-09-27 22:31:45.784814] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:49.951 [2024-09-27 22:31:45.784831] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:49.951 [2024-09-27 22:31:45.785039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.951 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.952 
22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.952 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.209 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.209 "name": "raid_bdev1", 00:14:50.209 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:50.209 "strip_size_kb": 0, 00:14:50.209 "state": "online", 00:14:50.209 "raid_level": "raid1", 00:14:50.209 "superblock": true, 00:14:50.209 "num_base_bdevs": 4, 00:14:50.209 "num_base_bdevs_discovered": 4, 00:14:50.209 "num_base_bdevs_operational": 4, 00:14:50.209 "base_bdevs_list": [ 00:14:50.209 { 00:14:50.209 "name": "pt1", 00:14:50.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.209 "is_configured": true, 00:14:50.209 "data_offset": 2048, 00:14:50.209 "data_size": 63488 00:14:50.209 }, 00:14:50.209 { 00:14:50.209 "name": "pt2", 00:14:50.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.209 "is_configured": true, 00:14:50.209 "data_offset": 2048, 00:14:50.209 "data_size": 63488 00:14:50.209 }, 00:14:50.209 { 00:14:50.209 "name": "pt3", 00:14:50.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.209 "is_configured": true, 00:14:50.209 "data_offset": 2048, 00:14:50.209 "data_size": 63488 
00:14:50.209 }, 00:14:50.209 { 00:14:50.209 "name": "pt4", 00:14:50.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:50.209 "is_configured": true, 00:14:50.209 "data_offset": 2048, 00:14:50.209 "data_size": 63488 00:14:50.209 } 00:14:50.209 ] 00:14:50.209 }' 00:14:50.209 22:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.209 22:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.467 [2024-09-27 22:31:46.229251] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.467 "name": "raid_bdev1", 00:14:50.467 "aliases": [ 00:14:50.467 "99f24381-dbc1-4d05-8006-cbdaced55c4f" 00:14:50.467 ], 
00:14:50.467 "product_name": "Raid Volume", 00:14:50.467 "block_size": 512, 00:14:50.467 "num_blocks": 63488, 00:14:50.467 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:50.467 "assigned_rate_limits": { 00:14:50.467 "rw_ios_per_sec": 0, 00:14:50.467 "rw_mbytes_per_sec": 0, 00:14:50.467 "r_mbytes_per_sec": 0, 00:14:50.467 "w_mbytes_per_sec": 0 00:14:50.467 }, 00:14:50.467 "claimed": false, 00:14:50.467 "zoned": false, 00:14:50.467 "supported_io_types": { 00:14:50.467 "read": true, 00:14:50.467 "write": true, 00:14:50.467 "unmap": false, 00:14:50.467 "flush": false, 00:14:50.467 "reset": true, 00:14:50.467 "nvme_admin": false, 00:14:50.467 "nvme_io": false, 00:14:50.467 "nvme_io_md": false, 00:14:50.467 "write_zeroes": true, 00:14:50.467 "zcopy": false, 00:14:50.467 "get_zone_info": false, 00:14:50.467 "zone_management": false, 00:14:50.467 "zone_append": false, 00:14:50.467 "compare": false, 00:14:50.467 "compare_and_write": false, 00:14:50.467 "abort": false, 00:14:50.467 "seek_hole": false, 00:14:50.467 "seek_data": false, 00:14:50.467 "copy": false, 00:14:50.467 "nvme_iov_md": false 00:14:50.467 }, 00:14:50.467 "memory_domains": [ 00:14:50.467 { 00:14:50.467 "dma_device_id": "system", 00:14:50.467 "dma_device_type": 1 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.467 "dma_device_type": 2 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": "system", 00:14:50.467 "dma_device_type": 1 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.467 "dma_device_type": 2 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": "system", 00:14:50.467 "dma_device_type": 1 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.467 "dma_device_type": 2 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": "system", 00:14:50.467 "dma_device_type": 1 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:50.467 "dma_device_type": 2 00:14:50.467 } 00:14:50.467 ], 00:14:50.467 "driver_specific": { 00:14:50.467 "raid": { 00:14:50.467 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:50.467 "strip_size_kb": 0, 00:14:50.467 "state": "online", 00:14:50.467 "raid_level": "raid1", 00:14:50.467 "superblock": true, 00:14:50.467 "num_base_bdevs": 4, 00:14:50.467 "num_base_bdevs_discovered": 4, 00:14:50.467 "num_base_bdevs_operational": 4, 00:14:50.467 "base_bdevs_list": [ 00:14:50.467 { 00:14:50.467 "name": "pt1", 00:14:50.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.467 "is_configured": true, 00:14:50.467 "data_offset": 2048, 00:14:50.467 "data_size": 63488 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "name": "pt2", 00:14:50.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.467 "is_configured": true, 00:14:50.467 "data_offset": 2048, 00:14:50.467 "data_size": 63488 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "name": "pt3", 00:14:50.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.467 "is_configured": true, 00:14:50.467 "data_offset": 2048, 00:14:50.467 "data_size": 63488 00:14:50.467 }, 00:14:50.467 { 00:14:50.467 "name": "pt4", 00:14:50.467 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:50.467 "is_configured": true, 00:14:50.467 "data_offset": 2048, 00:14:50.467 "data_size": 63488 00:14:50.467 } 00:14:50.467 ] 00:14:50.467 } 00:14:50.467 } 00:14:50.467 }' 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.467 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:50.468 pt2 00:14:50.468 pt3 00:14:50.468 pt4' 00:14:50.468 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.726 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.727 22:31:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.727 [2024-09-27 22:31:46.552762] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=99f24381-dbc1-4d05-8006-cbdaced55c4f 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 99f24381-dbc1-4d05-8006-cbdaced55c4f ']' 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.727 [2024-09-27 22:31:46.596378] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.727 [2024-09-27 22:31:46.596414] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.727 [2024-09-27 22:31:46.596503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.727 [2024-09-27 22:31:46.596599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.727 [2024-09-27 22:31:46.596619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:50.727 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.986 22:31:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.986 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.986 [2024-09-27 22:31:46.764160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:50.986 [2024-09-27 22:31:46.766502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:50.986 [2024-09-27 22:31:46.766557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:50.986 [2024-09-27 22:31:46.766593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:50.986 [2024-09-27 22:31:46.766659] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:50.986 [2024-09-27 22:31:46.766717] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:50.987 [2024-09-27 22:31:46.766739] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:50.987 [2024-09-27 22:31:46.766762] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:50.987 [2024-09-27 22:31:46.766778] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.987 [2024-09-27 22:31:46.766791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:14:50.987 request: 00:14:50.987 { 00:14:50.987 "name": "raid_bdev1", 00:14:50.987 "raid_level": "raid1", 00:14:50.987 "base_bdevs": [ 00:14:50.987 "malloc1", 00:14:50.987 "malloc2", 00:14:50.987 "malloc3", 00:14:50.987 "malloc4" 00:14:50.987 ], 00:14:50.987 "superblock": false, 00:14:50.987 "method": "bdev_raid_create", 00:14:50.987 "req_id": 1 00:14:50.987 } 00:14:50.987 Got JSON-RPC error response 00:14:50.987 response: 00:14:50.987 { 00:14:50.987 "code": -17, 00:14:50.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:50.987 } 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:50.987 
22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.987 [2024-09-27 22:31:46.832078] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:50.987 [2024-09-27 22:31:46.832301] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.987 [2024-09-27 22:31:46.832330] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:50.987 [2024-09-27 22:31:46.832347] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.987 [2024-09-27 22:31:46.835086] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.987 [2024-09-27 22:31:46.835136] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:50.987 [2024-09-27 22:31:46.835229] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:50.987 [2024-09-27 22:31:46.835296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:50.987 pt1 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.987 22:31:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.987 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.245 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.245 "name": "raid_bdev1", 00:14:51.245 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:51.245 "strip_size_kb": 0, 00:14:51.245 "state": "configuring", 00:14:51.245 "raid_level": "raid1", 00:14:51.245 "superblock": true, 00:14:51.245 "num_base_bdevs": 4, 00:14:51.245 "num_base_bdevs_discovered": 1, 00:14:51.245 "num_base_bdevs_operational": 4, 00:14:51.245 "base_bdevs_list": [ 00:14:51.245 { 00:14:51.245 "name": "pt1", 00:14:51.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.245 "is_configured": true, 00:14:51.245 "data_offset": 2048, 00:14:51.245 "data_size": 63488 00:14:51.245 }, 00:14:51.245 { 00:14:51.245 "name": null, 00:14:51.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.245 "is_configured": false, 00:14:51.245 "data_offset": 2048, 00:14:51.245 "data_size": 63488 00:14:51.245 }, 00:14:51.245 { 00:14:51.245 "name": null, 00:14:51.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.245 
"is_configured": false, 00:14:51.245 "data_offset": 2048, 00:14:51.245 "data_size": 63488 00:14:51.245 }, 00:14:51.245 { 00:14:51.245 "name": null, 00:14:51.245 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:51.245 "is_configured": false, 00:14:51.246 "data_offset": 2048, 00:14:51.246 "data_size": 63488 00:14:51.246 } 00:14:51.246 ] 00:14:51.246 }' 00:14:51.246 22:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.246 22:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 [2024-09-27 22:31:47.295713] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.504 [2024-09-27 22:31:47.295940] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.504 [2024-09-27 22:31:47.296016] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:51.504 [2024-09-27 22:31:47.296108] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.504 [2024-09-27 22:31:47.296659] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.504 [2024-09-27 22:31:47.296686] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.504 [2024-09-27 22:31:47.296775] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:51.504 [2024-09-27 22:31:47.296809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:51.504 pt2 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 [2024-09-27 22:31:47.307756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.504 "name": "raid_bdev1", 00:14:51.504 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:51.504 "strip_size_kb": 0, 00:14:51.504 "state": "configuring", 00:14:51.504 "raid_level": "raid1", 00:14:51.504 "superblock": true, 00:14:51.504 "num_base_bdevs": 4, 00:14:51.504 "num_base_bdevs_discovered": 1, 00:14:51.504 "num_base_bdevs_operational": 4, 00:14:51.504 "base_bdevs_list": [ 00:14:51.504 { 00:14:51.504 "name": "pt1", 00:14:51.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.504 "is_configured": true, 00:14:51.504 "data_offset": 2048, 00:14:51.504 "data_size": 63488 00:14:51.504 }, 00:14:51.504 { 00:14:51.504 "name": null, 00:14:51.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.504 "is_configured": false, 00:14:51.504 "data_offset": 0, 00:14:51.504 "data_size": 63488 00:14:51.504 }, 00:14:51.504 { 00:14:51.504 "name": null, 00:14:51.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.504 "is_configured": false, 00:14:51.504 "data_offset": 2048, 00:14:51.504 "data_size": 63488 00:14:51.504 }, 00:14:51.504 { 00:14:51.504 "name": null, 00:14:51.504 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:51.504 "is_configured": false, 00:14:51.504 "data_offset": 2048, 00:14:51.504 "data_size": 63488 00:14:51.504 } 00:14:51.504 ] 00:14:51.504 }' 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.504 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.071 [2024-09-27 22:31:47.771741] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.071 [2024-09-27 22:31:47.771822] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.071 [2024-09-27 22:31:47.771848] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:52.071 [2024-09-27 22:31:47.771860] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.071 [2024-09-27 22:31:47.772355] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.071 [2024-09-27 22:31:47.772376] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.071 [2024-09-27 22:31:47.772469] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:52.071 [2024-09-27 22:31:47.772492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.071 pt2 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.071 22:31:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.071 [2024-09-27 22:31:47.783714] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.071 [2024-09-27 22:31:47.783789] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.071 [2024-09-27 22:31:47.783830] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:52.071 [2024-09-27 22:31:47.783843] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.071 [2024-09-27 22:31:47.784342] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.071 [2024-09-27 22:31:47.784364] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.071 [2024-09-27 22:31:47.784454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:52.071 [2024-09-27 22:31:47.784484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.071 pt3 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.071 [2024-09-27 22:31:47.795678] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:52.071 [2024-09-27 
22:31:47.795756] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.071 [2024-09-27 22:31:47.795797] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:52.071 [2024-09-27 22:31:47.795809] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.071 [2024-09-27 22:31:47.796321] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.071 [2024-09-27 22:31:47.796341] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:52.071 [2024-09-27 22:31:47.796443] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:52.071 [2024-09-27 22:31:47.796466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:52.071 [2024-09-27 22:31:47.796614] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:52.071 [2024-09-27 22:31:47.796625] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:52.071 [2024-09-27 22:31:47.796927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:52.071 [2024-09-27 22:31:47.797128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:52.071 [2024-09-27 22:31:47.797143] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:52.071 [2024-09-27 22:31:47.797285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.071 pt4 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:52.071 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.072 "name": "raid_bdev1", 00:14:52.072 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:52.072 "strip_size_kb": 0, 00:14:52.072 "state": "online", 00:14:52.072 "raid_level": "raid1", 00:14:52.072 "superblock": true, 00:14:52.072 "num_base_bdevs": 4, 00:14:52.072 
"num_base_bdevs_discovered": 4, 00:14:52.072 "num_base_bdevs_operational": 4, 00:14:52.072 "base_bdevs_list": [ 00:14:52.072 { 00:14:52.072 "name": "pt1", 00:14:52.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.072 "is_configured": true, 00:14:52.072 "data_offset": 2048, 00:14:52.072 "data_size": 63488 00:14:52.072 }, 00:14:52.072 { 00:14:52.072 "name": "pt2", 00:14:52.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.072 "is_configured": true, 00:14:52.072 "data_offset": 2048, 00:14:52.072 "data_size": 63488 00:14:52.072 }, 00:14:52.072 { 00:14:52.072 "name": "pt3", 00:14:52.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.072 "is_configured": true, 00:14:52.072 "data_offset": 2048, 00:14:52.072 "data_size": 63488 00:14:52.072 }, 00:14:52.072 { 00:14:52.072 "name": "pt4", 00:14:52.072 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:52.072 "is_configured": true, 00:14:52.072 "data_offset": 2048, 00:14:52.072 "data_size": 63488 00:14:52.072 } 00:14:52.072 ] 00:14:52.072 }' 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.072 22:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.698 22:31:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 [2024-09-27 22:31:48.232050] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.698 "name": "raid_bdev1", 00:14:52.698 "aliases": [ 00:14:52.698 "99f24381-dbc1-4d05-8006-cbdaced55c4f" 00:14:52.698 ], 00:14:52.698 "product_name": "Raid Volume", 00:14:52.698 "block_size": 512, 00:14:52.698 "num_blocks": 63488, 00:14:52.698 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:52.698 "assigned_rate_limits": { 00:14:52.698 "rw_ios_per_sec": 0, 00:14:52.698 "rw_mbytes_per_sec": 0, 00:14:52.698 "r_mbytes_per_sec": 0, 00:14:52.698 "w_mbytes_per_sec": 0 00:14:52.698 }, 00:14:52.698 "claimed": false, 00:14:52.698 "zoned": false, 00:14:52.698 "supported_io_types": { 00:14:52.698 "read": true, 00:14:52.698 "write": true, 00:14:52.698 "unmap": false, 00:14:52.698 "flush": false, 00:14:52.698 "reset": true, 00:14:52.698 "nvme_admin": false, 00:14:52.698 "nvme_io": false, 00:14:52.698 "nvme_io_md": false, 00:14:52.698 "write_zeroes": true, 00:14:52.698 "zcopy": false, 00:14:52.698 "get_zone_info": false, 00:14:52.698 "zone_management": false, 00:14:52.698 "zone_append": false, 00:14:52.698 "compare": false, 00:14:52.698 "compare_and_write": false, 00:14:52.698 "abort": false, 00:14:52.698 "seek_hole": false, 00:14:52.698 "seek_data": false, 00:14:52.698 "copy": false, 00:14:52.698 "nvme_iov_md": false 00:14:52.698 }, 00:14:52.698 "memory_domains": [ 00:14:52.698 { 00:14:52.698 "dma_device_id": "system", 00:14:52.698 
"dma_device_type": 1 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.698 "dma_device_type": 2 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "system", 00:14:52.698 "dma_device_type": 1 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.698 "dma_device_type": 2 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "system", 00:14:52.698 "dma_device_type": 1 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.698 "dma_device_type": 2 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "system", 00:14:52.698 "dma_device_type": 1 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.698 "dma_device_type": 2 00:14:52.698 } 00:14:52.698 ], 00:14:52.698 "driver_specific": { 00:14:52.698 "raid": { 00:14:52.698 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:52.698 "strip_size_kb": 0, 00:14:52.698 "state": "online", 00:14:52.698 "raid_level": "raid1", 00:14:52.698 "superblock": true, 00:14:52.698 "num_base_bdevs": 4, 00:14:52.698 "num_base_bdevs_discovered": 4, 00:14:52.698 "num_base_bdevs_operational": 4, 00:14:52.698 "base_bdevs_list": [ 00:14:52.698 { 00:14:52.698 "name": "pt1", 00:14:52.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 2048, 00:14:52.698 "data_size": 63488 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "name": "pt2", 00:14:52.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 2048, 00:14:52.698 "data_size": 63488 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "name": "pt3", 00:14:52.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 2048, 00:14:52.698 "data_size": 63488 00:14:52.698 }, 00:14:52.698 { 00:14:52.698 "name": "pt4", 00:14:52.698 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:52.698 "is_configured": true, 00:14:52.698 "data_offset": 2048, 00:14:52.698 "data_size": 63488 00:14:52.698 } 00:14:52.698 ] 00:14:52.698 } 00:14:52.698 } 00:14:52.698 }' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:52.698 pt2 00:14:52.698 pt3 00:14:52.698 pt4' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.698 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.699 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.699 [2024-09-27 22:31:48.564040] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 99f24381-dbc1-4d05-8006-cbdaced55c4f '!=' 99f24381-dbc1-4d05-8006-cbdaced55c4f ']' 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:52.956 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.957 [2024-09-27 22:31:48.607746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:52.957 22:31:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.957 "name": "raid_bdev1", 00:14:52.957 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:52.957 "strip_size_kb": 0, 00:14:52.957 "state": "online", 
00:14:52.957 "raid_level": "raid1", 00:14:52.957 "superblock": true, 00:14:52.957 "num_base_bdevs": 4, 00:14:52.957 "num_base_bdevs_discovered": 3, 00:14:52.957 "num_base_bdevs_operational": 3, 00:14:52.957 "base_bdevs_list": [ 00:14:52.957 { 00:14:52.957 "name": null, 00:14:52.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.957 "is_configured": false, 00:14:52.957 "data_offset": 0, 00:14:52.957 "data_size": 63488 00:14:52.957 }, 00:14:52.957 { 00:14:52.957 "name": "pt2", 00:14:52.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.957 "is_configured": true, 00:14:52.957 "data_offset": 2048, 00:14:52.957 "data_size": 63488 00:14:52.957 }, 00:14:52.957 { 00:14:52.957 "name": "pt3", 00:14:52.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.957 "is_configured": true, 00:14:52.957 "data_offset": 2048, 00:14:52.957 "data_size": 63488 00:14:52.957 }, 00:14:52.957 { 00:14:52.957 "name": "pt4", 00:14:52.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:52.957 "is_configured": true, 00:14:52.957 "data_offset": 2048, 00:14:52.957 "data_size": 63488 00:14:52.957 } 00:14:52.957 ] 00:14:52.957 }' 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.957 22:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.216 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.216 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.216 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.216 [2024-09-27 22:31:49.087676] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.216 [2024-09-27 22:31:49.087716] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.216 [2024-09-27 22:31:49.087802] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:53.216 [2024-09-27 22:31:49.087886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.216 [2024-09-27 22:31:49.087899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:53.216 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:53.474 
22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.474 [2024-09-27 22:31:49.183687] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:53.474 [2024-09-27 22:31:49.183760] vbdev_passthru.c: 
715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.474 [2024-09-27 22:31:49.183785] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:53.474 [2024-09-27 22:31:49.183798] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.474 [2024-09-27 22:31:49.186556] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.474 [2024-09-27 22:31:49.186733] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:53.474 [2024-09-27 22:31:49.186852] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:53.474 [2024-09-27 22:31:49.186903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.474 pt2 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.474 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.474 "name": "raid_bdev1", 00:14:53.474 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:53.474 "strip_size_kb": 0, 00:14:53.474 "state": "configuring", 00:14:53.474 "raid_level": "raid1", 00:14:53.474 "superblock": true, 00:14:53.474 "num_base_bdevs": 4, 00:14:53.474 "num_base_bdevs_discovered": 1, 00:14:53.474 "num_base_bdevs_operational": 3, 00:14:53.474 "base_bdevs_list": [ 00:14:53.474 { 00:14:53.474 "name": null, 00:14:53.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.474 "is_configured": false, 00:14:53.474 "data_offset": 2048, 00:14:53.474 "data_size": 63488 00:14:53.474 }, 00:14:53.474 { 00:14:53.474 "name": "pt2", 00:14:53.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.474 "is_configured": true, 00:14:53.474 "data_offset": 2048, 00:14:53.474 "data_size": 63488 00:14:53.474 }, 00:14:53.474 { 00:14:53.474 "name": null, 00:14:53.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:53.474 "is_configured": false, 00:14:53.474 "data_offset": 2048, 00:14:53.474 "data_size": 63488 00:14:53.474 }, 00:14:53.474 { 00:14:53.474 "name": null, 00:14:53.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:53.474 "is_configured": false, 00:14:53.474 "data_offset": 2048, 00:14:53.474 "data_size": 63488 00:14:53.474 } 00:14:53.474 ] 00:14:53.474 }' 
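The `verify_raid_bdev_state` calls traced above (bdev_raid.sh@103-115) boil down to: fetch the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, select the entry by name with jq, and compare `state`, `raid_level`, and the base-bdev counters against expected values. A minimal self-contained sketch of that check, using a hard-coded JSON sample and a sed-based field extractor in place of the live RPC and jq (the `get_field` helper and the sample values are illustrative assumptions, not part of the SPDK test suite):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed sample of what `rpc_cmd bdev_raid_get_bdevs all` returns for one bdev,
# matching the "configuring" dump seen in this run.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}'

# get_field <key>: print the value of a top-level "key": value line,
# stripping surrounding quotes and a trailing comma. Stand-in for jq.
get_field() {
  sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p" <<<"$raid_bdev_info"
}

# The same comparisons verify_raid_bdev_state performs for
# `verify_raid_bdev_state raid_bdev1 configuring raid1 0 3`.
[[ "$(get_field state)" == configuring ]]
[[ "$(get_field raid_level)" == raid1 ]]
[[ "$(get_field strip_size_kb)" == 0 ]]
[[ "$(get_field num_base_bdevs_operational)" == 3 ]]
echo "state verified"
```

In the real script the expected counts change as base bdevs are added or deleted; here they are fixed to the single "configuring, 1 discovered, 3 operational" snapshot above.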
00:14:53.475 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.475 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 [2024-09-27 22:31:49.675713] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:54.040 [2024-09-27 22:31:49.675791] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.040 [2024-09-27 22:31:49.675819] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:54.040 [2024-09-27 22:31:49.675832] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.040 [2024-09-27 22:31:49.676354] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.040 [2024-09-27 22:31:49.676376] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:54.040 [2024-09-27 22:31:49.676468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:54.040 [2024-09-27 22:31:49.676491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:54.040 pt3 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.040 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.040 "name": "raid_bdev1", 00:14:54.040 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:54.040 "strip_size_kb": 0, 00:14:54.040 "state": "configuring", 00:14:54.040 "raid_level": "raid1", 00:14:54.040 "superblock": true, 00:14:54.040 "num_base_bdevs": 4, 00:14:54.040 "num_base_bdevs_discovered": 2, 00:14:54.040 "num_base_bdevs_operational": 3, 00:14:54.040 
"base_bdevs_list": [ 00:14:54.040 { 00:14:54.040 "name": null, 00:14:54.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.040 "is_configured": false, 00:14:54.040 "data_offset": 2048, 00:14:54.040 "data_size": 63488 00:14:54.040 }, 00:14:54.040 { 00:14:54.040 "name": "pt2", 00:14:54.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.040 "is_configured": true, 00:14:54.040 "data_offset": 2048, 00:14:54.040 "data_size": 63488 00:14:54.040 }, 00:14:54.040 { 00:14:54.040 "name": "pt3", 00:14:54.040 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.040 "is_configured": true, 00:14:54.040 "data_offset": 2048, 00:14:54.040 "data_size": 63488 00:14:54.040 }, 00:14:54.040 { 00:14:54.040 "name": null, 00:14:54.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:54.040 "is_configured": false, 00:14:54.040 "data_offset": 2048, 00:14:54.040 "data_size": 63488 00:14:54.040 } 00:14:54.040 ] 00:14:54.040 }' 00:14:54.041 22:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.041 22:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.298 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:54.298 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:54.298 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:54.298 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:54.298 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.298 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.298 [2024-09-27 22:31:50.111731] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:54.298 [2024-09-27 22:31:50.111818] vbdev_passthru.c: 
715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.298 [2024-09-27 22:31:50.111849] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:54.298 [2024-09-27 22:31:50.111863] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.298 [2024-09-27 22:31:50.112382] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.298 [2024-09-27 22:31:50.112406] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:54.298 [2024-09-27 22:31:50.112497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:54.298 [2024-09-27 22:31:50.112529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:54.298 [2024-09-27 22:31:50.112684] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:54.298 [2024-09-27 22:31:50.112695] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.298 [2024-09-27 22:31:50.113182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:54.298 [2024-09-27 22:31:50.113434] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:54.299 [2024-09-27 22:31:50.113483] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:54.299 [2024-09-27 22:31:50.113773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.299 pt4 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.299 "name": "raid_bdev1", 00:14:54.299 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:54.299 "strip_size_kb": 0, 00:14:54.299 "state": "online", 00:14:54.299 "raid_level": "raid1", 00:14:54.299 "superblock": true, 00:14:54.299 "num_base_bdevs": 4, 00:14:54.299 "num_base_bdevs_discovered": 3, 00:14:54.299 "num_base_bdevs_operational": 3, 00:14:54.299 "base_bdevs_list": [ 00:14:54.299 { 00:14:54.299 "name": null, 00:14:54.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.299 "is_configured": false, 00:14:54.299 
"data_offset": 2048, 00:14:54.299 "data_size": 63488 00:14:54.299 }, 00:14:54.299 { 00:14:54.299 "name": "pt2", 00:14:54.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.299 "is_configured": true, 00:14:54.299 "data_offset": 2048, 00:14:54.299 "data_size": 63488 00:14:54.299 }, 00:14:54.299 { 00:14:54.299 "name": "pt3", 00:14:54.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.299 "is_configured": true, 00:14:54.299 "data_offset": 2048, 00:14:54.299 "data_size": 63488 00:14:54.299 }, 00:14:54.299 { 00:14:54.299 "name": "pt4", 00:14:54.299 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:54.299 "is_configured": true, 00:14:54.299 "data_offset": 2048, 00:14:54.299 "data_size": 63488 00:14:54.299 } 00:14:54.299 ] 00:14:54.299 }' 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.299 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 [2024-09-27 22:31:50.543729] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.865 [2024-09-27 22:31:50.543777] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.865 [2024-09-27 22:31:50.543877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.865 [2024-09-27 22:31:50.543967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.865 [2024-09-27 22:31:50.543984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:54.865 22:31:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 [2024-09-27 22:31:50.615733] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:54.865 [2024-09-27 22:31:50.615831] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:54.865 [2024-09-27 22:31:50.615854] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:54.865 [2024-09-27 22:31:50.615869] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.865 [2024-09-27 22:31:50.618759] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.865 [2024-09-27 22:31:50.618828] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:54.865 [2024-09-27 22:31:50.618950] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:54.865 [2024-09-27 22:31:50.619034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.865 [2024-09-27 22:31:50.619183] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:54.865 [2024-09-27 22:31:50.619203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.865 [2024-09-27 22:31:50.619225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:54.865 [2024-09-27 22:31:50.619303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.865 [2024-09-27 22:31:50.619421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:54.865 pt1 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.865 "name": "raid_bdev1", 00:14:54.865 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:54.865 "strip_size_kb": 0, 00:14:54.865 "state": "configuring", 00:14:54.865 "raid_level": "raid1", 00:14:54.865 "superblock": true, 00:14:54.865 "num_base_bdevs": 4, 00:14:54.865 "num_base_bdevs_discovered": 2, 00:14:54.865 "num_base_bdevs_operational": 3, 00:14:54.865 "base_bdevs_list": [ 00:14:54.865 { 00:14:54.865 "name": null, 00:14:54.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.865 "is_configured": false, 00:14:54.865 "data_offset": 2048, 00:14:54.865 
"data_size": 63488 00:14:54.865 }, 00:14:54.865 { 00:14:54.865 "name": "pt2", 00:14:54.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.865 "is_configured": true, 00:14:54.865 "data_offset": 2048, 00:14:54.865 "data_size": 63488 00:14:54.865 }, 00:14:54.865 { 00:14:54.865 "name": "pt3", 00:14:54.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.865 "is_configured": true, 00:14:54.865 "data_offset": 2048, 00:14:54.865 "data_size": 63488 00:14:54.865 }, 00:14:54.865 { 00:14:54.865 "name": null, 00:14:54.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:54.865 "is_configured": false, 00:14:54.865 "data_offset": 2048, 00:14:54.865 "data_size": 63488 00:14:54.865 } 00:14:54.865 ] 00:14:54.865 }' 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.865 22:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.432 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:55.432 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.432 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:55.432 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.433 [2024-09-27 
22:31:51.147737] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:55.433 [2024-09-27 22:31:51.147810] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.433 [2024-09-27 22:31:51.147838] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:55.433 [2024-09-27 22:31:51.147851] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.433 [2024-09-27 22:31:51.148344] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.433 [2024-09-27 22:31:51.148366] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:55.433 [2024-09-27 22:31:51.148455] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:55.433 [2024-09-27 22:31:51.148480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:55.433 [2024-09-27 22:31:51.148609] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:55.433 [2024-09-27 22:31:51.148620] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:55.433 [2024-09-27 22:31:51.148901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:55.433 [2024-09-27 22:31:51.149094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:55.433 [2024-09-27 22:31:51.149110] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:55.433 [2024-09-27 22:31:51.149275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.433 pt4 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:55.433 22:31:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.433 "name": "raid_bdev1", 00:14:55.433 "uuid": "99f24381-dbc1-4d05-8006-cbdaced55c4f", 00:14:55.433 "strip_size_kb": 0, 00:14:55.433 "state": "online", 00:14:55.433 "raid_level": "raid1", 00:14:55.433 "superblock": true, 00:14:55.433 "num_base_bdevs": 4, 00:14:55.433 "num_base_bdevs_discovered": 3, 00:14:55.433 "num_base_bdevs_operational": 3, 00:14:55.433 "base_bdevs_list": [ 00:14:55.433 { 
00:14:55.433 "name": null, 00:14:55.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.433 "is_configured": false, 00:14:55.433 "data_offset": 2048, 00:14:55.433 "data_size": 63488 00:14:55.433 }, 00:14:55.433 { 00:14:55.433 "name": "pt2", 00:14:55.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.433 "is_configured": true, 00:14:55.433 "data_offset": 2048, 00:14:55.433 "data_size": 63488 00:14:55.433 }, 00:14:55.433 { 00:14:55.433 "name": "pt3", 00:14:55.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:55.433 "is_configured": true, 00:14:55.433 "data_offset": 2048, 00:14:55.433 "data_size": 63488 00:14:55.433 }, 00:14:55.433 { 00:14:55.433 "name": "pt4", 00:14:55.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:55.433 "is_configured": true, 00:14:55.433 "data_offset": 2048, 00:14:55.433 "data_size": 63488 00:14:55.433 } 00:14:55.433 ] 00:14:55.433 }' 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.433 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.031 
22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:56.031 [2024-09-27 22:31:51.620008] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 99f24381-dbc1-4d05-8006-cbdaced55c4f '!=' 99f24381-dbc1-4d05-8006-cbdaced55c4f ']' 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75343 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75343 ']' 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75343 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75343 00:14:56.031 killing process with pid 75343 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75343' 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75343 00:14:56.031 [2024-09-27 22:31:51.697332] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.031 22:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75343 00:14:56.031 [2024-09-27 22:31:51.697438] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.031 [2024-09-27 22:31:51.697545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.031 [2024-09-27 22:31:51.697560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:56.289 [2024-09-27 22:31:52.124753] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.819 22:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:58.819 00:14:58.819 real 0m9.991s 00:14:58.819 user 0m14.928s 00:14:58.819 sys 0m1.918s 00:14:58.819 22:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.819 22:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.819 ************************************ 00:14:58.819 END TEST raid_superblock_test 00:14:58.819 ************************************ 00:14:58.819 22:31:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:58.819 22:31:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:58.819 22:31:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.819 22:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.819 ************************************ 00:14:58.819 START TEST raid_read_error_test 00:14:58.819 ************************************ 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:58.819 22:31:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M2dBJRQCUL 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75847 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75847 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75847 ']' 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.819 22:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.819 [2024-09-27 22:31:54.462013] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:14:58.819 [2024-09-27 22:31:54.462161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75847 ] 00:14:58.819 [2024-09-27 22:31:54.637498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.078 [2024-09-27 22:31:54.901313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.337 [2024-09-27 22:31:55.156227] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.337 [2024-09-27 22:31:55.156514] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 BaseBdev1_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 true 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 [2024-09-27 22:31:55.734425] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:00.026 [2024-09-27 22:31:55.734512] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.026 [2024-09-27 22:31:55.734550] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:00.026 [2024-09-27 22:31:55.734583] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.026 [2024-09-27 22:31:55.737351] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.026 [2024-09-27 22:31:55.737420] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:00.026 BaseBdev1 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 BaseBdev2_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 true 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 [2024-09-27 22:31:55.811506] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:00.026 [2024-09-27 22:31:55.811612] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.026 [2024-09-27 22:31:55.811639] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:00.026 [2024-09-27 22:31:55.811655] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.026 [2024-09-27 22:31:55.814430] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.026 [2024-09-27 22:31:55.814492] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:00.026 BaseBdev2 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.026 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.026 BaseBdev3_malloc 00:15:00.026 22:31:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.027 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:00.027 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.027 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 true 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 [2024-09-27 22:31:55.887501] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:00.285 [2024-09-27 22:31:55.887619] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.285 [2024-09-27 22:31:55.887646] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:00.285 [2024-09-27 22:31:55.887663] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.285 [2024-09-27 22:31:55.890370] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.285 [2024-09-27 22:31:55.890427] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:00.285 BaseBdev3 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 BaseBdev4_malloc 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 true 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 [2024-09-27 22:31:55.964150] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:00.285 [2024-09-27 22:31:55.964437] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.285 [2024-09-27 22:31:55.964474] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:00.285 [2024-09-27 22:31:55.964490] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.285 [2024-09-27 22:31:55.967231] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.285 [2024-09-27 22:31:55.967288] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:00.285 BaseBdev4 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.285 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.285 [2024-09-27 22:31:55.976249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.285 [2024-09-27 22:31:55.978644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.286 [2024-09-27 22:31:55.978880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.286 [2024-09-27 22:31:55.978967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:00.286 [2024-09-27 22:31:55.979259] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:00.286 [2024-09-27 22:31:55.979276] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:00.286 [2024-09-27 22:31:55.979595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:00.286 [2024-09-27 22:31:55.979794] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:00.286 [2024-09-27 22:31:55.979805] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:00.286 [2024-09-27 22:31:55.980018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:00.286 22:31:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.286 22:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.286 22:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.286 22:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.286 "name": "raid_bdev1", 00:15:00.286 "uuid": "9d08ace9-9072-40ea-9adc-212a56b486b6", 00:15:00.286 "strip_size_kb": 0, 00:15:00.286 "state": "online", 00:15:00.286 "raid_level": "raid1", 00:15:00.286 "superblock": true, 00:15:00.286 "num_base_bdevs": 4, 00:15:00.286 "num_base_bdevs_discovered": 4, 00:15:00.286 "num_base_bdevs_operational": 4, 00:15:00.286 "base_bdevs_list": [ 00:15:00.286 { 
00:15:00.286 "name": "BaseBdev1", 00:15:00.286 "uuid": "541fb0d2-fd92-50f6-9968-642d491a26a2", 00:15:00.286 "is_configured": true, 00:15:00.286 "data_offset": 2048, 00:15:00.286 "data_size": 63488 00:15:00.286 }, 00:15:00.286 { 00:15:00.286 "name": "BaseBdev2", 00:15:00.286 "uuid": "d58b1861-7124-5119-9608-1e858ffc42da", 00:15:00.286 "is_configured": true, 00:15:00.286 "data_offset": 2048, 00:15:00.286 "data_size": 63488 00:15:00.286 }, 00:15:00.286 { 00:15:00.286 "name": "BaseBdev3", 00:15:00.286 "uuid": "cec40d3e-c864-553d-b33d-c5825a88692c", 00:15:00.286 "is_configured": true, 00:15:00.286 "data_offset": 2048, 00:15:00.286 "data_size": 63488 00:15:00.286 }, 00:15:00.286 { 00:15:00.286 "name": "BaseBdev4", 00:15:00.286 "uuid": "7c46e1de-c462-5e6f-b278-2dc66ab32872", 00:15:00.286 "is_configured": true, 00:15:00.286 "data_offset": 2048, 00:15:00.286 "data_size": 63488 00:15:00.286 } 00:15:00.286 ] 00:15:00.286 }' 00:15:00.286 22:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.286 22:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.852 22:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:00.852 22:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:00.852 [2024-09-27 22:31:56.557246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.785 22:31:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.785 22:31:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.785 "name": "raid_bdev1", 00:15:01.785 "uuid": "9d08ace9-9072-40ea-9adc-212a56b486b6", 00:15:01.785 "strip_size_kb": 0, 00:15:01.785 "state": "online", 00:15:01.785 "raid_level": "raid1", 00:15:01.785 "superblock": true, 00:15:01.785 "num_base_bdevs": 4, 00:15:01.785 "num_base_bdevs_discovered": 4, 00:15:01.785 "num_base_bdevs_operational": 4, 00:15:01.785 "base_bdevs_list": [ 00:15:01.785 { 00:15:01.785 "name": "BaseBdev1", 00:15:01.785 "uuid": "541fb0d2-fd92-50f6-9968-642d491a26a2", 00:15:01.785 "is_configured": true, 00:15:01.785 "data_offset": 2048, 00:15:01.785 "data_size": 63488 00:15:01.785 }, 00:15:01.785 { 00:15:01.785 "name": "BaseBdev2", 00:15:01.785 "uuid": "d58b1861-7124-5119-9608-1e858ffc42da", 00:15:01.785 "is_configured": true, 00:15:01.785 "data_offset": 2048, 00:15:01.785 "data_size": 63488 00:15:01.785 }, 00:15:01.785 { 00:15:01.785 "name": "BaseBdev3", 00:15:01.785 "uuid": "cec40d3e-c864-553d-b33d-c5825a88692c", 00:15:01.785 "is_configured": true, 00:15:01.785 "data_offset": 2048, 00:15:01.785 "data_size": 63488 00:15:01.785 }, 00:15:01.785 { 00:15:01.785 "name": "BaseBdev4", 00:15:01.785 "uuid": "7c46e1de-c462-5e6f-b278-2dc66ab32872", 00:15:01.785 "is_configured": true, 00:15:01.785 "data_offset": 2048, 00:15:01.785 "data_size": 63488 00:15:01.785 } 00:15:01.785 ] 00:15:01.785 }' 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.785 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.045 [2024-09-27 22:31:57.841762] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.045 [2024-09-27 22:31:57.841801] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.045 [2024-09-27 22:31:57.844575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.045 [2024-09-27 22:31:57.844639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.045 [2024-09-27 22:31:57.844770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.045 [2024-09-27 22:31:57.844785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:02.045 { 00:15:02.045 "results": [ 00:15:02.045 { 00:15:02.045 "job": "raid_bdev1", 00:15:02.045 "core_mask": "0x1", 00:15:02.045 "workload": "randrw", 00:15:02.045 "percentage": 50, 00:15:02.045 "status": "finished", 00:15:02.045 "queue_depth": 1, 00:15:02.045 "io_size": 131072, 00:15:02.045 "runtime": 1.284049, 00:15:02.045 "iops": 9877.348917369976, 00:15:02.045 "mibps": 1234.668614671247, 00:15:02.045 "io_failed": 0, 00:15:02.045 "io_timeout": 0, 00:15:02.045 "avg_latency_us": 98.23064045189669, 00:15:02.045 "min_latency_us": 25.70281124497992, 00:15:02.045 "max_latency_us": 1493.641767068273 00:15:02.045 } 00:15:02.045 ], 00:15:02.045 "core_count": 1 00:15:02.045 } 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75847 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75847 ']' 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75847 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75847 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.045 killing process with pid 75847 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75847' 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75847 00:15:02.045 [2024-09-27 22:31:57.893340] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.045 22:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75847 00:15:02.614 [2024-09-27 22:31:58.220179] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M2dBJRQCUL 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:04.540 ************************************ 00:15:04.540 END TEST raid_read_error_test 00:15:04.540 ************************************ 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:04.540 00:15:04.540 real 0m6.071s 00:15:04.540 user 0m6.773s 00:15:04.540 sys 0m0.794s 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.540 22:32:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.799 22:32:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:04.799 22:32:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:04.799 22:32:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.799 22:32:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.799 ************************************ 00:15:04.799 START TEST raid_write_error_test 00:15:04.799 ************************************ 00:15:04.799 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:15:04.799 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hp5DfiRyg8 00:15:04.800 22:32:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76004 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76004 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76004 ']' 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.800 22:32:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.800 [2024-09-27 22:32:00.605953] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:15:04.800 [2024-09-27 22:32:00.606096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76004 ] 00:15:05.062 [2024-09-27 22:32:00.783330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.322 [2024-09-27 22:32:01.040505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.580 [2024-09-27 22:32:01.294805] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.580 [2024-09-27 22:32:01.294848] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.146 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.146 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 BaseBdev1_malloc 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 true 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 [2024-09-27 22:32:01.869354] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:06.147 [2024-09-27 22:32:01.869677] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.147 [2024-09-27 22:32:01.869715] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:06.147 [2024-09-27 22:32:01.869733] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.147 [2024-09-27 22:32:01.872783] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.147 [2024-09-27 22:32:01.873028] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:06.147 BaseBdev1 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 BaseBdev2_malloc 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:06.147 22:32:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 true 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 [2024-09-27 22:32:01.946901] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:06.147 [2024-09-27 22:32:01.947001] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.147 [2024-09-27 22:32:01.947025] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:06.147 [2024-09-27 22:32:01.947041] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.147 [2024-09-27 22:32:01.949806] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.147 [2024-09-27 22:32:01.949866] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:06.147 BaseBdev2 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:06.147 BaseBdev3_malloc 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.147 true 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.147 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 [2024-09-27 22:32:02.023304] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:06.406 [2024-09-27 22:32:02.023388] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.406 [2024-09-27 22:32:02.023413] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:06.406 [2024-09-27 22:32:02.023428] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.406 [2024-09-27 22:32:02.026319] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.406 [2024-09-27 22:32:02.026379] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:06.406 BaseBdev3 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 BaseBdev4_malloc 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 true 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 [2024-09-27 22:32:02.101182] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:06.406 [2024-09-27 22:32:02.101416] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.406 [2024-09-27 22:32:02.101479] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:06.406 [2024-09-27 22:32:02.101571] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.406 [2024-09-27 22:32:02.104420] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.406 [2024-09-27 22:32:02.104601] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:06.406 BaseBdev4 
00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.406 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 [2024-09-27 22:32:02.113444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.406 [2024-09-27 22:32:02.115980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.406 [2024-09-27 22:32:02.116225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.407 [2024-09-27 22:32:02.116319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:06.407 [2024-09-27 22:32:02.116596] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:06.407 [2024-09-27 22:32:02.116614] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:06.407 [2024-09-27 22:32:02.116957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:06.407 [2024-09-27 22:32:02.117184] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:06.407 [2024-09-27 22:32:02.117197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:06.407 [2024-09-27 22:32:02.117450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.407 "name": "raid_bdev1", 00:15:06.407 "uuid": "f2913ca3-b8f3-4e94-aebc-a9339fc8b6c4", 00:15:06.407 "strip_size_kb": 0, 00:15:06.407 "state": "online", 00:15:06.407 "raid_level": "raid1", 00:15:06.407 "superblock": true, 00:15:06.407 "num_base_bdevs": 4, 00:15:06.407 "num_base_bdevs_discovered": 4, 00:15:06.407 
"num_base_bdevs_operational": 4, 00:15:06.407 "base_bdevs_list": [ 00:15:06.407 { 00:15:06.407 "name": "BaseBdev1", 00:15:06.407 "uuid": "7486771a-7691-5709-95a3-78816b9b2714", 00:15:06.407 "is_configured": true, 00:15:06.407 "data_offset": 2048, 00:15:06.407 "data_size": 63488 00:15:06.407 }, 00:15:06.407 { 00:15:06.407 "name": "BaseBdev2", 00:15:06.407 "uuid": "d2f1d607-490d-56cf-a6dd-13335694f86c", 00:15:06.407 "is_configured": true, 00:15:06.407 "data_offset": 2048, 00:15:06.407 "data_size": 63488 00:15:06.407 }, 00:15:06.407 { 00:15:06.407 "name": "BaseBdev3", 00:15:06.407 "uuid": "d64141cc-ced1-5471-abfd-61fe60190a97", 00:15:06.407 "is_configured": true, 00:15:06.407 "data_offset": 2048, 00:15:06.407 "data_size": 63488 00:15:06.407 }, 00:15:06.407 { 00:15:06.407 "name": "BaseBdev4", 00:15:06.407 "uuid": "9269835e-8d59-5cb1-8d53-cc4383e29d92", 00:15:06.407 "is_configured": true, 00:15:06.407 "data_offset": 2048, 00:15:06.407 "data_size": 63488 00:15:06.407 } 00:15:06.407 ] 00:15:06.407 }' 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.407 22:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.974 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:06.974 22:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:06.974 [2024-09-27 22:32:02.654369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 [2024-09-27 22:32:03.558841] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:07.947 [2024-09-27 22:32:03.558916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.947 [2024-09-27 22:32:03.559162] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.947 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.948 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.948 "name": "raid_bdev1", 00:15:07.948 "uuid": "f2913ca3-b8f3-4e94-aebc-a9339fc8b6c4", 00:15:07.948 "strip_size_kb": 0, 00:15:07.948 "state": "online", 00:15:07.948 "raid_level": "raid1", 00:15:07.948 "superblock": true, 00:15:07.948 "num_base_bdevs": 4, 00:15:07.948 "num_base_bdevs_discovered": 3, 00:15:07.948 "num_base_bdevs_operational": 3, 00:15:07.948 "base_bdevs_list": [ 00:15:07.948 { 00:15:07.948 "name": null, 00:15:07.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.948 "is_configured": false, 00:15:07.948 "data_offset": 0, 00:15:07.948 "data_size": 63488 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "name": "BaseBdev2", 00:15:07.948 "uuid": "d2f1d607-490d-56cf-a6dd-13335694f86c", 00:15:07.948 "is_configured": true, 00:15:07.948 "data_offset": 2048, 00:15:07.948 "data_size": 63488 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "name": "BaseBdev3", 00:15:07.948 "uuid": "d64141cc-ced1-5471-abfd-61fe60190a97", 00:15:07.948 "is_configured": true, 00:15:07.948 "data_offset": 2048, 00:15:07.948 "data_size": 63488 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "name": "BaseBdev4", 00:15:07.948 "uuid": "9269835e-8d59-5cb1-8d53-cc4383e29d92", 00:15:07.948 "is_configured": true, 00:15:07.948 "data_offset": 2048, 00:15:07.948 "data_size": 63488 00:15:07.948 } 00:15:07.948 ] 
00:15:07.948 }' 00:15:07.948 22:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.948 22:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.207 [2024-09-27 22:32:04.023446] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.207 [2024-09-27 22:32:04.023688] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.207 [2024-09-27 22:32:04.026586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.207 [2024-09-27 22:32:04.026639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.207 [2024-09-27 22:32:04.026750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.207 [2024-09-27 22:32:04.026763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:08.207 { 00:15:08.207 "results": [ 00:15:08.207 { 00:15:08.207 "job": "raid_bdev1", 00:15:08.207 "core_mask": "0x1", 00:15:08.207 "workload": "randrw", 00:15:08.207 "percentage": 50, 00:15:08.207 "status": "finished", 00:15:08.207 "queue_depth": 1, 00:15:08.207 "io_size": 131072, 00:15:08.207 "runtime": 1.368893, 00:15:08.207 "iops": 10740.065147531619, 00:15:08.207 "mibps": 1342.5081434414524, 00:15:08.207 "io_failed": 0, 00:15:08.207 "io_timeout": 0, 00:15:08.207 "avg_latency_us": 90.04108415706084, 00:15:08.207 "min_latency_us": 25.6, 00:15:08.207 "max_latency_us": 1566.0208835341366 00:15:08.207 } 00:15:08.207 ], 00:15:08.207 "core_count": 1 00:15:08.207 } 
00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76004 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76004 ']' 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76004 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76004 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.207 killing process with pid 76004 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76004' 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76004 00:15:08.207 [2024-09-27 22:32:04.075742] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.207 22:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76004 00:15:08.774 [2024-09-27 22:32:04.433976] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hp5DfiRyg8 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:11.351 00:15:11.351 real 0m6.162s 00:15:11.351 user 0m6.933s 00:15:11.351 sys 0m0.766s 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.351 ************************************ 00:15:11.351 END TEST raid_write_error_test 00:15:11.351 ************************************ 00:15:11.351 22:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.351 22:32:06 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:11.351 22:32:06 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:11.351 22:32:06 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:11.351 22:32:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:11.351 22:32:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.351 22:32:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.351 ************************************ 00:15:11.351 START TEST raid_rebuild_test 00:15:11.351 ************************************ 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:11.351 
22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=76164 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 76164 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 76164 ']' 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.351 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.352 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.352 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.352 22:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.352 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:11.352 Zero copy mechanism will not be used. 00:15:11.352 [2024-09-27 22:32:06.837040] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:15:11.352 [2024-09-27 22:32:06.837180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76164 ] 00:15:11.352 [2024-09-27 22:32:07.012461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.609 [2024-09-27 22:32:07.263458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.867 [2024-09-27 22:32:07.521893] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.867 [2024-09-27 22:32:07.521941] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 BaseBdev1_malloc 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 [2024-09-27 22:32:08.087163] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:12.433 
[2024-09-27 22:32:08.087270] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.433 [2024-09-27 22:32:08.087299] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:12.433 [2024-09-27 22:32:08.087332] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.433 [2024-09-27 22:32:08.090056] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.433 [2024-09-27 22:32:08.090105] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:12.433 BaseBdev1 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 BaseBdev2_malloc 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 [2024-09-27 22:32:08.154747] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:12.433 [2024-09-27 22:32:08.154850] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.433 [2024-09-27 22:32:08.154896] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:15:12.433 [2024-09-27 22:32:08.154913] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.433 [2024-09-27 22:32:08.157566] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.433 [2024-09-27 22:32:08.157621] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:12.433 BaseBdev2 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 spare_malloc 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 spare_delay 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 [2024-09-27 22:32:08.232950] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.433 [2024-09-27 22:32:08.233070] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:12.433 [2024-09-27 22:32:08.233098] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:12.433 [2024-09-27 22:32:08.233114] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.433 [2024-09-27 22:32:08.235917] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.433 [2024-09-27 22:32:08.235989] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.433 spare 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 [2024-09-27 22:32:08.245044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.433 [2024-09-27 22:32:08.247469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.433 [2024-09-27 22:32:08.247628] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:12.433 [2024-09-27 22:32:08.247646] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:12.433 [2024-09-27 22:32:08.248019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.433 [2024-09-27 22:32:08.248224] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:12.433 [2024-09-27 22:32:08.248241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:12.433 [2024-09-27 22:32:08.248428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.433 "name": "raid_bdev1", 00:15:12.433 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:12.433 "strip_size_kb": 0, 00:15:12.433 "state": "online", 00:15:12.433 
"raid_level": "raid1", 00:15:12.433 "superblock": false, 00:15:12.433 "num_base_bdevs": 2, 00:15:12.433 "num_base_bdevs_discovered": 2, 00:15:12.433 "num_base_bdevs_operational": 2, 00:15:12.433 "base_bdevs_list": [ 00:15:12.433 { 00:15:12.433 "name": "BaseBdev1", 00:15:12.433 "uuid": "1c66865c-abb0-54e4-9ebc-f2b25d68d2b1", 00:15:12.433 "is_configured": true, 00:15:12.433 "data_offset": 0, 00:15:12.433 "data_size": 65536 00:15:12.433 }, 00:15:12.433 { 00:15:12.433 "name": "BaseBdev2", 00:15:12.433 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:12.433 "is_configured": true, 00:15:12.433 "data_offset": 0, 00:15:12.433 "data_size": 65536 00:15:12.433 } 00:15:12.433 ] 00:15:12.433 }' 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.433 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.000 [2024-09-27 22:32:08.680611] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.000 22:32:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.000 22:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:13.259 [2024-09-27 22:32:08.995967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:13.259 /dev/nbd0 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.259 1+0 records in 00:15:13.259 1+0 records out 00:15:13.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489577 s, 8.4 MB/s 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:13.259 22:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:18.548 65536+0 records in 00:15:18.548 65536+0 records out 00:15:18.548 33554432 bytes (34 MB, 32 MiB) copied, 4.63703 s, 7.2 MB/s 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:18.548 [2024-09-27 22:32:13.944328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 [2024-09-27 22:32:13.956432] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.548 22:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.548 22:32:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.548 "name": "raid_bdev1", 00:15:18.548 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:18.548 "strip_size_kb": 0, 00:15:18.548 "state": "online", 00:15:18.548 "raid_level": "raid1", 00:15:18.548 "superblock": false, 00:15:18.548 "num_base_bdevs": 2, 00:15:18.548 "num_base_bdevs_discovered": 1, 00:15:18.548 "num_base_bdevs_operational": 1, 00:15:18.548 "base_bdevs_list": [ 00:15:18.548 { 00:15:18.548 "name": null, 00:15:18.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.548 "is_configured": false, 00:15:18.548 "data_offset": 0, 00:15:18.548 "data_size": 65536 00:15:18.548 }, 00:15:18.548 { 00:15:18.548 "name": "BaseBdev2", 00:15:18.548 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:18.548 "is_configured": true, 00:15:18.548 "data_offset": 0, 00:15:18.548 "data_size": 65536 00:15:18.548 } 00:15:18.548 ] 00:15:18.548 }' 00:15:18.548 22:32:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.549 22:32:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.549 22:32:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.549 22:32:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.549 22:32:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.549 [2024-09-27 22:32:14.391910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.549 [2024-09-27 22:32:14.411677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:15:18.549 22:32:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.549 22:32:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:18.549 [2024-09-27 22:32:14.413990] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.924 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.925 "name": "raid_bdev1", 00:15:19.925 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:19.925 "strip_size_kb": 0, 00:15:19.925 "state": "online", 00:15:19.925 "raid_level": "raid1", 00:15:19.925 "superblock": false, 00:15:19.925 "num_base_bdevs": 2, 00:15:19.925 "num_base_bdevs_discovered": 2, 00:15:19.925 "num_base_bdevs_operational": 2, 00:15:19.925 "process": { 00:15:19.925 "type": "rebuild", 00:15:19.925 "target": "spare", 00:15:19.925 "progress": { 00:15:19.925 
"blocks": 20480, 00:15:19.925 "percent": 31 00:15:19.925 } 00:15:19.925 }, 00:15:19.925 "base_bdevs_list": [ 00:15:19.925 { 00:15:19.925 "name": "spare", 00:15:19.925 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:19.925 "is_configured": true, 00:15:19.925 "data_offset": 0, 00:15:19.925 "data_size": 65536 00:15:19.925 }, 00:15:19.925 { 00:15:19.925 "name": "BaseBdev2", 00:15:19.925 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:19.925 "is_configured": true, 00:15:19.925 "data_offset": 0, 00:15:19.925 "data_size": 65536 00:15:19.925 } 00:15:19.925 ] 00:15:19.925 }' 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.925 [2024-09-27 22:32:15.561196] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.925 [2024-09-27 22:32:15.619938] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.925 [2024-09-27 22:32:15.620042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.925 [2024-09-27 22:32:15.620060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.925 [2024-09-27 22:32:15.620073] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.925 22:32:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.925 "name": "raid_bdev1", 00:15:19.925 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:19.925 "strip_size_kb": 0, 00:15:19.925 "state": "online", 00:15:19.925 "raid_level": "raid1", 00:15:19.925 
"superblock": false, 00:15:19.925 "num_base_bdevs": 2, 00:15:19.925 "num_base_bdevs_discovered": 1, 00:15:19.925 "num_base_bdevs_operational": 1, 00:15:19.925 "base_bdevs_list": [ 00:15:19.925 { 00:15:19.925 "name": null, 00:15:19.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.925 "is_configured": false, 00:15:19.925 "data_offset": 0, 00:15:19.925 "data_size": 65536 00:15:19.925 }, 00:15:19.925 { 00:15:19.925 "name": "BaseBdev2", 00:15:19.925 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:19.925 "is_configured": true, 00:15:19.925 "data_offset": 0, 00:15:19.925 "data_size": 65536 00:15:19.925 } 00:15:19.925 ] 00:15:19.925 }' 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.925 22:32:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:20.490 "name": "raid_bdev1", 00:15:20.490 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:20.490 "strip_size_kb": 0, 00:15:20.490 "state": "online", 00:15:20.490 "raid_level": "raid1", 00:15:20.490 "superblock": false, 00:15:20.490 "num_base_bdevs": 2, 00:15:20.490 "num_base_bdevs_discovered": 1, 00:15:20.490 "num_base_bdevs_operational": 1, 00:15:20.490 "base_bdevs_list": [ 00:15:20.490 { 00:15:20.490 "name": null, 00:15:20.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.490 "is_configured": false, 00:15:20.490 "data_offset": 0, 00:15:20.490 "data_size": 65536 00:15:20.490 }, 00:15:20.490 { 00:15:20.490 "name": "BaseBdev2", 00:15:20.490 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:20.490 "is_configured": true, 00:15:20.490 "data_offset": 0, 00:15:20.490 "data_size": 65536 00:15:20.490 } 00:15:20.490 ] 00:15:20.490 }' 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.490 [2024-09-27 22:32:16.210321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.490 [2024-09-27 22:32:16.228929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:20.490 22:32:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.490 
22:32:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:20.490 [2024-09-27 22:32:16.231272] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.433 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.433 "name": "raid_bdev1", 00:15:21.433 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:21.433 "strip_size_kb": 0, 00:15:21.433 "state": "online", 00:15:21.433 "raid_level": "raid1", 00:15:21.433 "superblock": false, 00:15:21.433 "num_base_bdevs": 2, 00:15:21.433 "num_base_bdevs_discovered": 2, 00:15:21.433 "num_base_bdevs_operational": 2, 00:15:21.433 "process": { 00:15:21.433 "type": "rebuild", 00:15:21.433 "target": "spare", 00:15:21.433 "progress": { 00:15:21.433 "blocks": 20480, 00:15:21.433 "percent": 31 00:15:21.433 } 00:15:21.433 }, 00:15:21.433 "base_bdevs_list": [ 
00:15:21.433 { 00:15:21.433 "name": "spare", 00:15:21.433 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:21.434 "is_configured": true, 00:15:21.434 "data_offset": 0, 00:15:21.434 "data_size": 65536 00:15:21.434 }, 00:15:21.434 { 00:15:21.434 "name": "BaseBdev2", 00:15:21.434 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:21.434 "is_configured": true, 00:15:21.434 "data_offset": 0, 00:15:21.434 "data_size": 65536 00:15:21.434 } 00:15:21.434 ] 00:15:21.434 }' 00:15:21.434 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=441 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.691 
22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.691 "name": "raid_bdev1", 00:15:21.691 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:21.691 "strip_size_kb": 0, 00:15:21.691 "state": "online", 00:15:21.691 "raid_level": "raid1", 00:15:21.691 "superblock": false, 00:15:21.691 "num_base_bdevs": 2, 00:15:21.691 "num_base_bdevs_discovered": 2, 00:15:21.691 "num_base_bdevs_operational": 2, 00:15:21.691 "process": { 00:15:21.691 "type": "rebuild", 00:15:21.691 "target": "spare", 00:15:21.691 "progress": { 00:15:21.691 "blocks": 22528, 00:15:21.691 "percent": 34 00:15:21.691 } 00:15:21.691 }, 00:15:21.691 "base_bdevs_list": [ 00:15:21.691 { 00:15:21.691 "name": "spare", 00:15:21.691 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:21.691 "is_configured": true, 00:15:21.691 "data_offset": 0, 00:15:21.691 "data_size": 65536 00:15:21.691 }, 00:15:21.691 { 00:15:21.691 "name": "BaseBdev2", 00:15:21.691 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:21.691 "is_configured": true, 00:15:21.691 "data_offset": 0, 00:15:21.691 "data_size": 65536 00:15:21.691 } 00:15:21.691 ] 00:15:21.691 }' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.691 22:32:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.067 "name": "raid_bdev1", 00:15:23.067 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:23.067 "strip_size_kb": 0, 00:15:23.067 "state": "online", 00:15:23.067 "raid_level": "raid1", 00:15:23.067 "superblock": false, 00:15:23.067 "num_base_bdevs": 2, 00:15:23.067 "num_base_bdevs_discovered": 2, 00:15:23.067 "num_base_bdevs_operational": 2, 00:15:23.067 "process": { 
00:15:23.067 "type": "rebuild", 00:15:23.067 "target": "spare", 00:15:23.067 "progress": { 00:15:23.067 "blocks": 47104, 00:15:23.067 "percent": 71 00:15:23.067 } 00:15:23.067 }, 00:15:23.067 "base_bdevs_list": [ 00:15:23.067 { 00:15:23.067 "name": "spare", 00:15:23.067 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 0, 00:15:23.067 "data_size": 65536 00:15:23.067 }, 00:15:23.067 { 00:15:23.067 "name": "BaseBdev2", 00:15:23.067 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:23.067 "is_configured": true, 00:15:23.067 "data_offset": 0, 00:15:23.067 "data_size": 65536 00:15:23.067 } 00:15:23.067 ] 00:15:23.067 }' 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.067 22:32:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.633 [2024-09-27 22:32:19.446782] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:23.633 [2024-09-27 22:32:19.446892] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:23.633 [2024-09-27 22:32:19.446984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.892 "name": "raid_bdev1", 00:15:23.892 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:23.892 "strip_size_kb": 0, 00:15:23.892 "state": "online", 00:15:23.892 "raid_level": "raid1", 00:15:23.892 "superblock": false, 00:15:23.892 "num_base_bdevs": 2, 00:15:23.892 "num_base_bdevs_discovered": 2, 00:15:23.892 "num_base_bdevs_operational": 2, 00:15:23.892 "base_bdevs_list": [ 00:15:23.892 { 00:15:23.892 "name": "spare", 00:15:23.892 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:23.892 "is_configured": true, 00:15:23.892 "data_offset": 0, 00:15:23.892 "data_size": 65536 00:15:23.892 }, 00:15:23.892 { 00:15:23.892 "name": "BaseBdev2", 00:15:23.892 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:23.892 "is_configured": true, 00:15:23.892 "data_offset": 0, 00:15:23.892 "data_size": 65536 00:15:23.892 } 00:15:23.892 ] 00:15:23.892 }' 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.892 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:23.892 22:32:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.150 "name": "raid_bdev1", 00:15:24.150 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:24.150 "strip_size_kb": 0, 00:15:24.150 "state": "online", 00:15:24.150 "raid_level": "raid1", 00:15:24.150 "superblock": false, 00:15:24.150 "num_base_bdevs": 2, 00:15:24.150 "num_base_bdevs_discovered": 2, 00:15:24.150 "num_base_bdevs_operational": 2, 00:15:24.150 "base_bdevs_list": [ 00:15:24.150 { 00:15:24.150 "name": "spare", 00:15:24.150 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:24.150 "is_configured": true, 
00:15:24.150 "data_offset": 0, 00:15:24.150 "data_size": 65536 00:15:24.150 }, 00:15:24.150 { 00:15:24.150 "name": "BaseBdev2", 00:15:24.150 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:24.150 "is_configured": true, 00:15:24.150 "data_offset": 0, 00:15:24.150 "data_size": 65536 00:15:24.150 } 00:15:24.150 ] 00:15:24.150 }' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.150 "name": "raid_bdev1", 00:15:24.150 "uuid": "7549d138-0e1b-4e70-ba91-e6de47049ab9", 00:15:24.150 "strip_size_kb": 0, 00:15:24.150 "state": "online", 00:15:24.150 "raid_level": "raid1", 00:15:24.150 "superblock": false, 00:15:24.150 "num_base_bdevs": 2, 00:15:24.150 "num_base_bdevs_discovered": 2, 00:15:24.150 "num_base_bdevs_operational": 2, 00:15:24.150 "base_bdevs_list": [ 00:15:24.150 { 00:15:24.150 "name": "spare", 00:15:24.150 "uuid": "a575c861-f462-55b9-a8c9-48651e6dbaaa", 00:15:24.150 "is_configured": true, 00:15:24.150 "data_offset": 0, 00:15:24.150 "data_size": 65536 00:15:24.150 }, 00:15:24.150 { 00:15:24.150 "name": "BaseBdev2", 00:15:24.150 "uuid": "fecb4ae2-bef8-540a-a6c1-4c3a7c75421a", 00:15:24.150 "is_configured": true, 00:15:24.150 "data_offset": 0, 00:15:24.150 "data_size": 65536 00:15:24.150 } 00:15:24.150 ] 00:15:24.150 }' 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.150 22:32:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.714 [2024-09-27 22:32:20.372246] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.714 [2024-09-27 22:32:20.372306] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.714 [2024-09-27 22:32:20.372436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.714 [2024-09-27 22:32:20.372541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.714 [2024-09-27 22:32:20.372557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:24.714 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:24.972 /dev/nbd0 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.972 1+0 records in 00:15:24.972 1+0 records out 00:15:24.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317831 s, 12.9 MB/s 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:24.972 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:25.230 /dev/nbd1 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.230 1+0 records in 00:15:25.230 1+0 records out 00:15:25.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445724 s, 9.2 MB/s 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.230 22:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.487 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.745 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 76164 00:15:26.004 22:32:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 76164 ']' 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 76164 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76164 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.004 killing process with pid 76164 00:15:26.004 Received shutdown signal, test time was about 60.000000 seconds 00:15:26.004 00:15:26.004 Latency(us) 00:15:26.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.004 =================================================================================================================== 00:15:26.004 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76164' 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 76164 00:15:26.004 [2024-09-27 22:32:21.687816] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.004 22:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 76164 00:15:26.262 [2024-09-27 22:32:22.019469] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.814 ************************************ 00:15:28.814 END TEST raid_rebuild_test 00:15:28.814 ************************************ 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:28.814 00:15:28.814 real 0m17.409s 00:15:28.814 
user 0m19.256s 00:15:28.814 sys 0m3.633s 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.814 22:32:24 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:28.814 22:32:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:28.814 22:32:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.814 22:32:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.814 ************************************ 00:15:28.814 START TEST raid_rebuild_test_sb 00:15:28.814 ************************************ 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76601 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76601 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76601 ']' 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.814 22:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.814 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:28.814 Zero copy mechanism will not be used. 00:15:28.814 [2024-09-27 22:32:24.344642] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:15:28.814 [2024-09-27 22:32:24.344799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76601 ] 00:15:28.814 [2024-09-27 22:32:24.525141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.073 [2024-09-27 22:32:24.783882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.334 [2024-09-27 22:32:25.035719] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.334 [2024-09-27 22:32:25.035764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 BaseBdev1_malloc 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 [2024-09-27 22:32:25.599740] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:29.903 [2024-09-27 22:32:25.599850] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.903 [2024-09-27 22:32:25.599880] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:29.903 [2024-09-27 22:32:25.599900] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.903 [2024-09-27 22:32:25.602681] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.903 [2024-09-27 22:32:25.602741] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:29.903 BaseBdev1 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 BaseBdev2_malloc 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 [2024-09-27 22:32:25.664364] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:29.903 [2024-09-27 22:32:25.665061] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.903 [2024-09-27 22:32:25.665108] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:29.903 [2024-09-27 22:32:25.665127] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.903 [2024-09-27 22:32:25.668523] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.903 [2024-09-27 22:32:25.668698] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.903 BaseBdev2 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 spare_malloc 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 spare_delay 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:29.903 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 [2024-09-27 22:32:25.742313] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:29.904 [2024-09-27 22:32:25.742400] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.904 [2024-09-27 22:32:25.742429] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:29.904 [2024-09-27 22:32:25.742445] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.904 [2024-09-27 22:32:25.745114] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.904 [2024-09-27 22:32:25.745166] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:29.904 spare 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 
[2024-09-27 22:32:25.754326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.904 [2024-09-27 22:32:25.756770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.904 [2024-09-27 22:32:25.757139] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:29.904 [2024-09-27 22:32:25.757162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:29.904 [2024-09-27 22:32:25.757474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:29.904 [2024-09-27 22:32:25.757646] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:29.904 [2024-09-27 22:32:25.757656] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:29.904 [2024-09-27 22:32:25.757836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.904 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.164 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.164 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.164 "name": "raid_bdev1", 00:15:30.164 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:30.164 "strip_size_kb": 0, 00:15:30.164 "state": "online", 00:15:30.164 "raid_level": "raid1", 00:15:30.164 "superblock": true, 00:15:30.164 "num_base_bdevs": 2, 00:15:30.164 "num_base_bdevs_discovered": 2, 00:15:30.164 "num_base_bdevs_operational": 2, 00:15:30.164 "base_bdevs_list": [ 00:15:30.164 { 00:15:30.164 "name": "BaseBdev1", 00:15:30.164 "uuid": "b449c565-d0e9-51ba-8ca2-66dd794a125a", 00:15:30.164 "is_configured": true, 00:15:30.164 "data_offset": 2048, 00:15:30.164 "data_size": 63488 00:15:30.164 }, 00:15:30.164 { 00:15:30.164 "name": "BaseBdev2", 00:15:30.164 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:30.164 "is_configured": true, 00:15:30.164 "data_offset": 2048, 00:15:30.164 "data_size": 63488 00:15:30.164 } 00:15:30.164 ] 00:15:30.164 }' 00:15:30.164 22:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.164 22:32:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.423 [2024-09-27 22:32:26.166128] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:30.423 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:30.424 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:30.682 [2024-09-27 22:32:26.465460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:30.682 /dev/nbd0 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.682 1+0 records in 00:15:30.682 1+0 records out 00:15:30.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349914 s, 11.7 MB/s 00:15:30.682 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:30.683 22:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:35.981 63488+0 records in 00:15:35.981 63488+0 records out 00:15:35.981 32505856 bytes (33 MB, 31 MiB) copied, 5.226 s, 6.2 MB/s 00:15:35.981 22:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:35.981 22:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.981 22:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:35.981 22:32:31 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.981 22:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:35.981 22:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.981 22:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:36.260 [2024-09-27 22:32:31.989866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.260 [2024-09-27 22:32:32.025919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.260 "name": "raid_bdev1", 00:15:36.260 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:36.260 "strip_size_kb": 0, 00:15:36.260 "state": "online", 00:15:36.260 "raid_level": "raid1", 00:15:36.260 "superblock": true, 00:15:36.260 "num_base_bdevs": 2, 00:15:36.260 "num_base_bdevs_discovered": 1, 00:15:36.260 
"num_base_bdevs_operational": 1, 00:15:36.260 "base_bdevs_list": [ 00:15:36.260 { 00:15:36.260 "name": null, 00:15:36.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.260 "is_configured": false, 00:15:36.260 "data_offset": 0, 00:15:36.260 "data_size": 63488 00:15:36.260 }, 00:15:36.260 { 00:15:36.260 "name": "BaseBdev2", 00:15:36.260 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:36.260 "is_configured": true, 00:15:36.260 "data_offset": 2048, 00:15:36.260 "data_size": 63488 00:15:36.260 } 00:15:36.260 ] 00:15:36.260 }' 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.260 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.828 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.828 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 [2024-09-27 22:32:32.477348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.828 [2024-09-27 22:32:32.498370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:36.828 22:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.828 22:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:36.828 [2024-09-27 22:32:32.500742] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.765 "name": "raid_bdev1", 00:15:37.765 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:37.765 "strip_size_kb": 0, 00:15:37.765 "state": "online", 00:15:37.765 "raid_level": "raid1", 00:15:37.765 "superblock": true, 00:15:37.765 "num_base_bdevs": 2, 00:15:37.765 "num_base_bdevs_discovered": 2, 00:15:37.765 "num_base_bdevs_operational": 2, 00:15:37.765 "process": { 00:15:37.765 "type": "rebuild", 00:15:37.765 "target": "spare", 00:15:37.765 "progress": { 00:15:37.765 "blocks": 20480, 00:15:37.765 "percent": 32 00:15:37.765 } 00:15:37.765 }, 00:15:37.765 "base_bdevs_list": [ 00:15:37.765 { 00:15:37.765 "name": "spare", 00:15:37.765 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:37.765 "is_configured": true, 00:15:37.765 "data_offset": 2048, 00:15:37.765 "data_size": 63488 00:15:37.765 }, 00:15:37.765 { 00:15:37.765 "name": "BaseBdev2", 00:15:37.765 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:37.765 "is_configured": true, 00:15:37.765 "data_offset": 2048, 00:15:37.765 "data_size": 63488 00:15:37.765 } 00:15:37.765 ] 00:15:37.765 }' 00:15:37.765 22:32:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.765 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.024 [2024-09-27 22:32:33.651838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.024 [2024-09-27 22:32:33.706697] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:38.024 [2024-09-27 22:32:33.706801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.024 [2024-09-27 22:32:33.706819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.024 [2024-09-27 22:32:33.706832] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.024 22:32:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.024 "name": "raid_bdev1", 00:15:38.024 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:38.024 "strip_size_kb": 0, 00:15:38.024 "state": "online", 00:15:38.024 "raid_level": "raid1", 00:15:38.024 "superblock": true, 00:15:38.024 "num_base_bdevs": 2, 00:15:38.024 "num_base_bdevs_discovered": 1, 00:15:38.024 "num_base_bdevs_operational": 1, 00:15:38.024 "base_bdevs_list": [ 00:15:38.024 { 00:15:38.024 "name": null, 00:15:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.024 "is_configured": false, 00:15:38.024 "data_offset": 0, 00:15:38.024 "data_size": 63488 00:15:38.024 }, 00:15:38.024 { 00:15:38.024 "name": "BaseBdev2", 00:15:38.024 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:38.024 
"is_configured": true, 00:15:38.024 "data_offset": 2048, 00:15:38.024 "data_size": 63488 00:15:38.024 } 00:15:38.024 ] 00:15:38.024 }' 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.024 22:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.601 "name": "raid_bdev1", 00:15:38.601 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:38.601 "strip_size_kb": 0, 00:15:38.601 "state": "online", 00:15:38.601 "raid_level": "raid1", 00:15:38.601 "superblock": true, 00:15:38.601 "num_base_bdevs": 2, 00:15:38.601 "num_base_bdevs_discovered": 1, 00:15:38.601 "num_base_bdevs_operational": 1, 00:15:38.601 "base_bdevs_list": [ 00:15:38.601 { 00:15:38.601 "name": null, 00:15:38.601 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:38.601 "is_configured": false, 00:15:38.601 "data_offset": 0, 00:15:38.601 "data_size": 63488 00:15:38.601 }, 00:15:38.601 { 00:15:38.601 "name": "BaseBdev2", 00:15:38.601 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:38.601 "is_configured": true, 00:15:38.601 "data_offset": 2048, 00:15:38.601 "data_size": 63488 00:15:38.601 } 00:15:38.601 ] 00:15:38.601 }' 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.601 [2024-09-27 22:32:34.333755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.601 [2024-09-27 22:32:34.354409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.601 22:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:38.601 [2024-09-27 22:32:34.356973] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.584 "name": "raid_bdev1", 00:15:39.584 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:39.584 "strip_size_kb": 0, 00:15:39.584 "state": "online", 00:15:39.584 "raid_level": "raid1", 00:15:39.584 "superblock": true, 00:15:39.584 "num_base_bdevs": 2, 00:15:39.584 "num_base_bdevs_discovered": 2, 00:15:39.584 "num_base_bdevs_operational": 2, 00:15:39.584 "process": { 00:15:39.584 "type": "rebuild", 00:15:39.584 "target": "spare", 00:15:39.584 "progress": { 00:15:39.584 "blocks": 20480, 00:15:39.584 "percent": 32 00:15:39.584 } 00:15:39.584 }, 00:15:39.584 "base_bdevs_list": [ 00:15:39.584 { 00:15:39.584 "name": "spare", 00:15:39.584 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:39.584 "is_configured": true, 00:15:39.584 "data_offset": 2048, 00:15:39.584 "data_size": 63488 00:15:39.584 }, 00:15:39.584 { 00:15:39.584 "name": "BaseBdev2", 00:15:39.584 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:39.584 "is_configured": true, 00:15:39.584 "data_offset": 2048, 
00:15:39.584 "data_size": 63488 00:15:39.584 } 00:15:39.584 ] 00:15:39.584 }' 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.584 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:39.843 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.843 "name": "raid_bdev1", 00:15:39.843 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:39.843 "strip_size_kb": 0, 00:15:39.843 "state": "online", 00:15:39.843 "raid_level": "raid1", 00:15:39.843 "superblock": true, 00:15:39.843 "num_base_bdevs": 2, 00:15:39.843 "num_base_bdevs_discovered": 2, 00:15:39.843 "num_base_bdevs_operational": 2, 00:15:39.843 "process": { 00:15:39.843 "type": "rebuild", 00:15:39.843 "target": "spare", 00:15:39.843 "progress": { 00:15:39.843 "blocks": 22528, 00:15:39.843 "percent": 35 00:15:39.843 } 00:15:39.843 }, 00:15:39.843 "base_bdevs_list": [ 00:15:39.843 { 00:15:39.843 "name": "spare", 00:15:39.843 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:39.843 "is_configured": true, 00:15:39.843 "data_offset": 2048, 00:15:39.843 "data_size": 63488 00:15:39.843 }, 00:15:39.843 { 00:15:39.843 "name": "BaseBdev2", 00:15:39.843 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:39.843 "is_configured": true, 00:15:39.843 "data_offset": 2048, 00:15:39.843 "data_size": 63488 00:15:39.843 } 00:15:39.843 ] 00:15:39.843 }' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.843 22:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.786 22:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.045 "name": "raid_bdev1", 00:15:41.045 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:41.045 "strip_size_kb": 0, 00:15:41.045 "state": "online", 00:15:41.045 "raid_level": "raid1", 00:15:41.045 "superblock": true, 00:15:41.045 "num_base_bdevs": 2, 00:15:41.045 "num_base_bdevs_discovered": 2, 00:15:41.045 "num_base_bdevs_operational": 2, 00:15:41.045 "process": { 00:15:41.045 "type": "rebuild", 00:15:41.045 "target": "spare", 
00:15:41.045 "progress": { 00:15:41.045 "blocks": 45056, 00:15:41.045 "percent": 70 00:15:41.045 } 00:15:41.045 }, 00:15:41.045 "base_bdevs_list": [ 00:15:41.045 { 00:15:41.045 "name": "spare", 00:15:41.045 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:41.045 "is_configured": true, 00:15:41.045 "data_offset": 2048, 00:15:41.045 "data_size": 63488 00:15:41.045 }, 00:15:41.045 { 00:15:41.045 "name": "BaseBdev2", 00:15:41.045 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:41.045 "is_configured": true, 00:15:41.045 "data_offset": 2048, 00:15:41.045 "data_size": 63488 00:15:41.045 } 00:15:41.045 ] 00:15:41.045 }' 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.045 22:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.612 [2024-09-27 22:32:37.471781] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:41.613 [2024-09-27 22:32:37.471883] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:41.613 [2024-09-27 22:32:37.472057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.181 "name": "raid_bdev1", 00:15:42.181 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:42.181 "strip_size_kb": 0, 00:15:42.181 "state": "online", 00:15:42.181 "raid_level": "raid1", 00:15:42.181 "superblock": true, 00:15:42.181 "num_base_bdevs": 2, 00:15:42.181 "num_base_bdevs_discovered": 2, 00:15:42.181 "num_base_bdevs_operational": 2, 00:15:42.181 "base_bdevs_list": [ 00:15:42.181 { 00:15:42.181 "name": "spare", 00:15:42.181 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:42.181 "is_configured": true, 00:15:42.181 "data_offset": 2048, 00:15:42.181 "data_size": 63488 00:15:42.181 }, 00:15:42.181 { 00:15:42.181 "name": "BaseBdev2", 00:15:42.181 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:42.181 "is_configured": true, 00:15:42.181 "data_offset": 2048, 00:15:42.181 "data_size": 63488 00:15:42.181 } 00:15:42.181 ] 00:15:42.181 }' 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:42.181 
22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.181 "name": "raid_bdev1", 00:15:42.181 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:42.181 "strip_size_kb": 0, 00:15:42.181 "state": "online", 00:15:42.181 "raid_level": "raid1", 00:15:42.181 "superblock": true, 00:15:42.181 "num_base_bdevs": 2, 00:15:42.181 "num_base_bdevs_discovered": 2, 00:15:42.181 "num_base_bdevs_operational": 2, 00:15:42.181 "base_bdevs_list": [ 00:15:42.181 { 00:15:42.181 "name": "spare", 00:15:42.181 "uuid": 
"5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:42.181 "is_configured": true, 00:15:42.181 "data_offset": 2048, 00:15:42.181 "data_size": 63488 00:15:42.181 }, 00:15:42.181 { 00:15:42.181 "name": "BaseBdev2", 00:15:42.181 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:42.181 "is_configured": true, 00:15:42.181 "data_offset": 2048, 00:15:42.181 "data_size": 63488 00:15:42.181 } 00:15:42.181 ] 00:15:42.181 }' 00:15:42.181 22:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.181 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.181 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.440 22:32:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.440 "name": "raid_bdev1", 00:15:42.440 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:42.440 "strip_size_kb": 0, 00:15:42.440 "state": "online", 00:15:42.440 "raid_level": "raid1", 00:15:42.440 "superblock": true, 00:15:42.440 "num_base_bdevs": 2, 00:15:42.440 "num_base_bdevs_discovered": 2, 00:15:42.440 "num_base_bdevs_operational": 2, 00:15:42.440 "base_bdevs_list": [ 00:15:42.440 { 00:15:42.440 "name": "spare", 00:15:42.440 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:42.440 "is_configured": true, 00:15:42.440 "data_offset": 2048, 00:15:42.440 "data_size": 63488 00:15:42.440 }, 00:15:42.440 { 00:15:42.440 "name": "BaseBdev2", 00:15:42.440 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:42.440 "is_configured": true, 00:15:42.440 "data_offset": 2048, 00:15:42.440 "data_size": 63488 00:15:42.440 } 00:15:42.440 ] 00:15:42.440 }' 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.440 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.699 [2024-09-27 22:32:38.485550] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.699 [2024-09-27 22:32:38.485589] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.699 [2024-09-27 22:32:38.485681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.699 [2024-09-27 22:32:38.485754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.699 [2024-09-27 22:32:38.485767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.699 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:42.958 /dev/nbd0 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.958 1+0 records in 00:15:42.958 1+0 records out 00:15:42.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467195 s, 8.8 MB/s 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.958 22:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:43.217 /dev/nbd1 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:43.217 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:43.476 22:32:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.476 1+0 records in 00:15:43.476 1+0 records out 00:15:43.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494913 s, 8.3 MB/s 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.476 
22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.476 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.745 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.004 [2024-09-27 22:32:39.804812] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.004 [2024-09-27 22:32:39.804891] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.004 [2024-09-27 22:32:39.804924] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:44.004 [2024-09-27 22:32:39.804937] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.004 [2024-09-27 22:32:39.807623] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.004 [2024-09-27 22:32:39.807684] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.004 [2024-09-27 22:32:39.807798] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:44.004 [2024-09-27 
22:32:39.807862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.004 [2024-09-27 22:32:39.808027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.004 spare 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.004 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.263 [2024-09-27 22:32:39.907969] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:44.263 [2024-09-27 22:32:39.908046] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:44.263 [2024-09-27 22:32:39.908433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:44.263 [2024-09-27 22:32:39.908659] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:44.263 [2024-09-27 22:32:39.908672] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:44.263 [2024-09-27 22:32:39.908888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.263 "name": "raid_bdev1", 00:15:44.263 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:44.263 "strip_size_kb": 0, 00:15:44.263 "state": "online", 00:15:44.263 "raid_level": "raid1", 00:15:44.263 "superblock": true, 00:15:44.263 "num_base_bdevs": 2, 00:15:44.263 "num_base_bdevs_discovered": 2, 00:15:44.263 "num_base_bdevs_operational": 2, 00:15:44.263 "base_bdevs_list": [ 00:15:44.263 { 00:15:44.263 "name": "spare", 00:15:44.263 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:44.263 "is_configured": true, 00:15:44.263 "data_offset": 2048, 00:15:44.263 "data_size": 63488 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "name": "BaseBdev2", 00:15:44.263 "uuid": 
"42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:44.263 "is_configured": true, 00:15:44.263 "data_offset": 2048, 00:15:44.263 "data_size": 63488 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }' 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.263 22:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.521 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.780 "name": "raid_bdev1", 00:15:44.780 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:44.780 "strip_size_kb": 0, 00:15:44.780 "state": "online", 00:15:44.780 "raid_level": "raid1", 00:15:44.780 "superblock": true, 00:15:44.780 "num_base_bdevs": 2, 00:15:44.780 "num_base_bdevs_discovered": 2, 00:15:44.780 "num_base_bdevs_operational": 2, 00:15:44.780 "base_bdevs_list": [ 00:15:44.780 { 
00:15:44.780 "name": "spare", 00:15:44.780 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:44.780 "is_configured": true, 00:15:44.780 "data_offset": 2048, 00:15:44.780 "data_size": 63488 00:15:44.780 }, 00:15:44.780 { 00:15:44.780 "name": "BaseBdev2", 00:15:44.780 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:44.780 "is_configured": true, 00:15:44.780 "data_offset": 2048, 00:15:44.780 "data_size": 63488 00:15:44.780 } 00:15:44.780 ] 00:15:44.780 }' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.780 [2024-09-27 22:32:40.592120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.780 "name": "raid_bdev1", 00:15:44.780 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:44.780 "strip_size_kb": 0, 00:15:44.780 
"state": "online", 00:15:44.780 "raid_level": "raid1", 00:15:44.780 "superblock": true, 00:15:44.780 "num_base_bdevs": 2, 00:15:44.780 "num_base_bdevs_discovered": 1, 00:15:44.780 "num_base_bdevs_operational": 1, 00:15:44.780 "base_bdevs_list": [ 00:15:44.780 { 00:15:44.780 "name": null, 00:15:44.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.780 "is_configured": false, 00:15:44.780 "data_offset": 0, 00:15:44.780 "data_size": 63488 00:15:44.780 }, 00:15:44.780 { 00:15:44.780 "name": "BaseBdev2", 00:15:44.780 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:44.780 "is_configured": true, 00:15:44.780 "data_offset": 2048, 00:15:44.780 "data_size": 63488 00:15:44.780 } 00:15:44.780 ] 00:15:44.780 }' 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.780 22:32:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.347 22:32:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.347 22:32:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.347 22:32:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.347 [2024-09-27 22:32:41.051842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.347 [2024-09-27 22:32:41.052062] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.347 [2024-09-27 22:32:41.052104] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:45.347 [2024-09-27 22:32:41.052156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.347 [2024-09-27 22:32:41.072515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:45.347 22:32:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.347 22:32:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:45.347 [2024-09-27 22:32:41.074897] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.283 "name": "raid_bdev1", 00:15:46.283 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:46.283 "strip_size_kb": 0, 00:15:46.283 "state": "online", 00:15:46.283 "raid_level": "raid1", 
00:15:46.283 "superblock": true, 00:15:46.283 "num_base_bdevs": 2, 00:15:46.283 "num_base_bdevs_discovered": 2, 00:15:46.283 "num_base_bdevs_operational": 2, 00:15:46.283 "process": { 00:15:46.283 "type": "rebuild", 00:15:46.283 "target": "spare", 00:15:46.283 "progress": { 00:15:46.283 "blocks": 20480, 00:15:46.283 "percent": 32 00:15:46.283 } 00:15:46.283 }, 00:15:46.283 "base_bdevs_list": [ 00:15:46.283 { 00:15:46.283 "name": "spare", 00:15:46.283 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:46.283 "is_configured": true, 00:15:46.283 "data_offset": 2048, 00:15:46.283 "data_size": 63488 00:15:46.283 }, 00:15:46.283 { 00:15:46.283 "name": "BaseBdev2", 00:15:46.283 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:46.283 "is_configured": true, 00:15:46.283 "data_offset": 2048, 00:15:46.283 "data_size": 63488 00:15:46.283 } 00:15:46.283 ] 00:15:46.283 }' 00:15:46.283 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.542 [2024-09-27 22:32:42.210097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.542 [2024-09-27 22:32:42.280705] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.542 [2024-09-27 22:32:42.281090] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:46.542 [2024-09-27 22:32:42.281213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.542 [2024-09-27 22:32:42.281259] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.542 "name": "raid_bdev1", 00:15:46.542 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:46.542 "strip_size_kb": 0, 00:15:46.542 "state": "online", 00:15:46.542 "raid_level": "raid1", 00:15:46.542 "superblock": true, 00:15:46.542 "num_base_bdevs": 2, 00:15:46.542 "num_base_bdevs_discovered": 1, 00:15:46.542 "num_base_bdevs_operational": 1, 00:15:46.542 "base_bdevs_list": [ 00:15:46.542 { 00:15:46.542 "name": null, 00:15:46.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.542 "is_configured": false, 00:15:46.542 "data_offset": 0, 00:15:46.542 "data_size": 63488 00:15:46.542 }, 00:15:46.542 { 00:15:46.542 "name": "BaseBdev2", 00:15:46.542 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:46.542 "is_configured": true, 00:15:46.542 "data_offset": 2048, 00:15:46.542 "data_size": 63488 00:15:46.542 } 00:15:46.542 ] 00:15:46.542 }' 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.542 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.110 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:47.110 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.110 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.110 [2024-09-27 22:32:42.772248] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:47.110 [2024-09-27 22:32:42.772499] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.110 [2024-09-27 22:32:42.772533] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:47.110 [2024-09-27 22:32:42.772549] vbdev_passthru.c: 
777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.110 [2024-09-27 22:32:42.773096] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.110 [2024-09-27 22:32:42.773127] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:47.110 [2024-09-27 22:32:42.773242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:47.110 [2024-09-27 22:32:42.773259] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.110 [2024-09-27 22:32:42.773271] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:47.110 [2024-09-27 22:32:42.773302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.110 [2024-09-27 22:32:42.793289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:47.110 spare 00:15:47.110 22:32:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.110 [2024-09-27 22:32:42.796007] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.110 22:32:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.049 "name": "raid_bdev1", 00:15:48.049 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:48.049 "strip_size_kb": 0, 00:15:48.049 "state": "online", 00:15:48.049 "raid_level": "raid1", 00:15:48.049 "superblock": true, 00:15:48.049 "num_base_bdevs": 2, 00:15:48.049 "num_base_bdevs_discovered": 2, 00:15:48.049 "num_base_bdevs_operational": 2, 00:15:48.049 "process": { 00:15:48.049 "type": "rebuild", 00:15:48.049 "target": "spare", 00:15:48.049 "progress": { 00:15:48.049 "blocks": 20480, 00:15:48.049 "percent": 32 00:15:48.049 } 00:15:48.049 }, 00:15:48.049 "base_bdevs_list": [ 00:15:48.049 { 00:15:48.049 "name": "spare", 00:15:48.049 "uuid": "5cf9a693-6ba3-5c3a-8ee2-61a086e105e4", 00:15:48.049 "is_configured": true, 00:15:48.049 "data_offset": 2048, 00:15:48.049 "data_size": 63488 00:15:48.049 }, 00:15:48.049 { 00:15:48.049 "name": "BaseBdev2", 00:15:48.049 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:48.049 "is_configured": true, 00:15:48.049 "data_offset": 2048, 00:15:48.049 "data_size": 63488 00:15:48.049 } 00:15:48.049 ] 00:15:48.049 }' 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.049 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.308 
22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.308 22:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.308 22:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.308 22:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.308 [2024-09-27 22:32:43.947885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.308 [2024-09-27 22:32:44.002355] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.308 [2024-09-27 22:32:44.002719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.308 [2024-09-27 22:32:44.002851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.308 [2024-09-27 22:32:44.002896] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.308 "name": "raid_bdev1", 00:15:48.308 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:48.308 "strip_size_kb": 0, 00:15:48.308 "state": "online", 00:15:48.308 "raid_level": "raid1", 00:15:48.308 "superblock": true, 00:15:48.308 "num_base_bdevs": 2, 00:15:48.308 "num_base_bdevs_discovered": 1, 00:15:48.308 "num_base_bdevs_operational": 1, 00:15:48.308 "base_bdevs_list": [ 00:15:48.308 { 00:15:48.308 "name": null, 00:15:48.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.308 "is_configured": false, 00:15:48.308 "data_offset": 0, 00:15:48.308 "data_size": 63488 00:15:48.308 }, 00:15:48.308 { 00:15:48.308 "name": "BaseBdev2", 00:15:48.308 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:48.308 "is_configured": true, 00:15:48.308 "data_offset": 2048, 00:15:48.308 "data_size": 63488 00:15:48.308 } 00:15:48.308 ] 00:15:48.308 }' 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.308 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.900 22:32:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.900 "name": "raid_bdev1", 00:15:48.900 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:48.900 "strip_size_kb": 0, 00:15:48.900 "state": "online", 00:15:48.900 "raid_level": "raid1", 00:15:48.900 "superblock": true, 00:15:48.900 "num_base_bdevs": 2, 00:15:48.900 "num_base_bdevs_discovered": 1, 00:15:48.900 "num_base_bdevs_operational": 1, 00:15:48.900 "base_bdevs_list": [ 00:15:48.900 { 00:15:48.900 "name": null, 00:15:48.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.900 "is_configured": false, 00:15:48.900 "data_offset": 0, 00:15:48.900 "data_size": 63488 00:15:48.900 }, 00:15:48.900 { 00:15:48.900 "name": "BaseBdev2", 00:15:48.900 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:48.900 "is_configured": true, 00:15:48.900 "data_offset": 2048, 00:15:48.900 "data_size": 
63488 00:15:48.900 } 00:15:48.900 ] 00:15:48.900 }' 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.900 [2024-09-27 22:32:44.677431] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.900 [2024-09-27 22:32:44.677727] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.900 [2024-09-27 22:32:44.677775] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:48.900 [2024-09-27 22:32:44.677790] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.900 [2024-09-27 22:32:44.678319] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.900 [2024-09-27 22:32:44.678340] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:15:48.900 [2024-09-27 22:32:44.678437] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:48.900 [2024-09-27 22:32:44.678453] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.900 [2024-09-27 22:32:44.678467] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:48.900 [2024-09-27 22:32:44.678483] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:48.900 BaseBdev1 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.900 22:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.837 22:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.096 22:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.096 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.096 "name": "raid_bdev1", 00:15:50.096 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:50.096 "strip_size_kb": 0, 00:15:50.096 "state": "online", 00:15:50.096 "raid_level": "raid1", 00:15:50.096 "superblock": true, 00:15:50.096 "num_base_bdevs": 2, 00:15:50.096 "num_base_bdevs_discovered": 1, 00:15:50.096 "num_base_bdevs_operational": 1, 00:15:50.096 "base_bdevs_list": [ 00:15:50.096 { 00:15:50.096 "name": null, 00:15:50.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.096 "is_configured": false, 00:15:50.096 "data_offset": 0, 00:15:50.096 "data_size": 63488 00:15:50.096 }, 00:15:50.096 { 00:15:50.096 "name": "BaseBdev2", 00:15:50.096 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:50.096 "is_configured": true, 00:15:50.096 "data_offset": 2048, 00:15:50.096 "data_size": 63488 00:15:50.096 } 00:15:50.096 ] 00:15:50.096 }' 00:15:50.096 22:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.096 22:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.355 "name": "raid_bdev1", 00:15:50.355 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:50.355 "strip_size_kb": 0, 00:15:50.355 "state": "online", 00:15:50.355 "raid_level": "raid1", 00:15:50.355 "superblock": true, 00:15:50.355 "num_base_bdevs": 2, 00:15:50.355 "num_base_bdevs_discovered": 1, 00:15:50.355 "num_base_bdevs_operational": 1, 00:15:50.355 "base_bdevs_list": [ 00:15:50.355 { 00:15:50.355 "name": null, 00:15:50.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.355 "is_configured": false, 00:15:50.355 "data_offset": 0, 00:15:50.355 "data_size": 63488 00:15:50.355 }, 00:15:50.355 { 00:15:50.355 "name": "BaseBdev2", 00:15:50.355 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:50.355 "is_configured": true, 00:15:50.355 "data_offset": 2048, 00:15:50.355 "data_size": 63488 00:15:50.355 } 00:15:50.355 ] 00:15:50.355 }' 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.355 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.355 22:32:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.614 [2024-09-27 22:32:46.283890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.614 [2024-09-27 22:32:46.284220] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.614 [2024-09-27 22:32:46.284263] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.614 request: 00:15:50.614 { 00:15:50.614 "base_bdev": "BaseBdev1", 00:15:50.614 "raid_bdev": "raid_bdev1", 00:15:50.614 "method": 
"bdev_raid_add_base_bdev", 00:15:50.614 "req_id": 1 00:15:50.614 } 00:15:50.614 Got JSON-RPC error response 00:15:50.614 response: 00:15:50.614 { 00:15:50.614 "code": -22, 00:15:50.614 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:50.614 } 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.614 22:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.631 22:32:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.631 "name": "raid_bdev1", 00:15:51.631 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:51.631 "strip_size_kb": 0, 00:15:51.631 "state": "online", 00:15:51.631 "raid_level": "raid1", 00:15:51.631 "superblock": true, 00:15:51.631 "num_base_bdevs": 2, 00:15:51.631 "num_base_bdevs_discovered": 1, 00:15:51.631 "num_base_bdevs_operational": 1, 00:15:51.631 "base_bdevs_list": [ 00:15:51.631 { 00:15:51.631 "name": null, 00:15:51.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.631 "is_configured": false, 00:15:51.631 "data_offset": 0, 00:15:51.631 "data_size": 63488 00:15:51.631 }, 00:15:51.631 { 00:15:51.631 "name": "BaseBdev2", 00:15:51.631 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:51.631 "is_configured": true, 00:15:51.631 "data_offset": 2048, 00:15:51.631 "data_size": 63488 00:15:51.631 } 00:15:51.631 ] 00:15:51.631 }' 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.631 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.890 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.890 22:32:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.890 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.890 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.148 "name": "raid_bdev1", 00:15:52.148 "uuid": "23f857f9-13ff-4ce4-9c10-f7828eda3312", 00:15:52.148 "strip_size_kb": 0, 00:15:52.148 "state": "online", 00:15:52.148 "raid_level": "raid1", 00:15:52.148 "superblock": true, 00:15:52.148 "num_base_bdevs": 2, 00:15:52.148 "num_base_bdevs_discovered": 1, 00:15:52.148 "num_base_bdevs_operational": 1, 00:15:52.148 "base_bdevs_list": [ 00:15:52.148 { 00:15:52.148 "name": null, 00:15:52.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.148 "is_configured": false, 00:15:52.148 "data_offset": 0, 00:15:52.148 "data_size": 63488 00:15:52.148 }, 00:15:52.148 { 00:15:52.148 "name": "BaseBdev2", 00:15:52.148 "uuid": "42673fa7-39f8-5168-8ba0-3bbacf150ae0", 00:15:52.148 "is_configured": true, 00:15:52.148 "data_offset": 2048, 00:15:52.148 "data_size": 63488 00:15:52.148 } 00:15:52.148 ] 00:15:52.148 }' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76601 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76601 ']' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 76601 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76601 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.148 killing process with pid 76601 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76601' 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 76601 00:15:52.148 Received shutdown signal, test time was about 60.000000 seconds 00:15:52.148 00:15:52.148 Latency(us) 00:15:52.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.148 =================================================================================================================== 00:15:52.148 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.148 [2024-09-27 22:32:47.949396] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.148 22:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 76601 
00:15:52.148 [2024-09-27 22:32:47.949571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.148 [2024-09-27 22:32:47.949624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.148 [2024-09-27 22:32:47.949638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:52.716 [2024-09-27 22:32:48.293754] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.620 22:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:54.620 00:15:54.620 real 0m26.221s 00:15:54.620 user 0m30.840s 00:15:54.620 sys 0m4.881s 00:15:54.620 ************************************ 00:15:54.620 END TEST raid_rebuild_test_sb 00:15:54.620 ************************************ 00:15:54.620 22:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.620 22:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.878 22:32:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:54.878 22:32:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:54.878 22:32:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.878 22:32:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.879 ************************************ 00:15:54.879 START TEST raid_rebuild_test_io 00:15:54.879 ************************************ 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77354 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77354 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 77354 ']' 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.879 22:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.879 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.879 Zero copy mechanism will not be used. 00:15:54.879 [2024-09-27 22:32:50.641021] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:15:54.879 [2024-09-27 22:32:50.641150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77354 ] 00:15:55.138 [2024-09-27 22:32:50.816150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.398 [2024-09-27 22:32:51.069081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.709 [2024-09-27 22:32:51.330309] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.709 [2024-09-27 22:32:51.330575] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.968 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.968 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:15:55.968 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.968 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.968 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.968 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 BaseBdev1_malloc 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 [2024-09-27 22:32:51.897608] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:56.228 [2024-09-27 22:32:51.897890] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.228 [2024-09-27 22:32:51.897959] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.228 [2024-09-27 22:32:51.898091] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.228 [2024-09-27 22:32:51.900854] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.228 [2024-09-27 22:32:51.901059] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.228 BaseBdev1 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 BaseBdev2_malloc 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 [2024-09-27 22:32:51.963145] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:56.228 [2024-09-27 22:32:51.963242] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.228 [2024-09-27 22:32:51.963275] vbdev_passthru.c: 
762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.228 [2024-09-27 22:32:51.963292] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.228 [2024-09-27 22:32:51.966010] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.228 [2024-09-27 22:32:51.966062] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:56.228 BaseBdev2 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 spare_malloc 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 spare_delay 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 [2024-09-27 22:32:52.040025] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
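The `rpc_cmd` calls traced above build each RAID member as a malloc bdev wrapped in a passthru bdev (`bdev_malloc_create 32 512`, then `bdev_passthru_create`). A 32 MiB malloc bdev with 512-byte blocks holds 65536 blocks, and a raid1 mirror exposes the capacity of its smallest member, which matches the `blockcnt 65536, blocklen 512` logged when `raid_bdev1` is configured. A minimal sketch of that arithmetic (the helper names are illustrative, not SPDK API):

```python
# Sketch: capacity math behind "bdev_malloc_create 32 512" and the
# "blockcnt 65536, blocklen 512" line logged when raid_bdev1 comes online.

MIB = 1024 * 1024

def malloc_bdev_blocks(size_mib: int, block_size: int) -> int:
    """Block count of a bdev created as `bdev_malloc_create <size_mib> <block_size>`."""
    return size_mib * MIB // block_size

def raid1_blocks(member_blocks: list[int]) -> int:
    """A raid1 mirror exposes the capacity of its smallest member."""
    return min(member_blocks)

base1 = malloc_bdev_blocks(32, 512)   # BaseBdev1_malloc
base2 = malloc_bdev_blocks(32, 512)   # BaseBdev2_malloc
print(raid1_blocks([base1, base2]))   # 65536, matching the log
```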
00:15:56.228 [2024-09-27 22:32:52.040121] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.228 [2024-09-27 22:32:52.040149] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:56.228 [2024-09-27 22:32:52.040165] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.228 [2024-09-27 22:32:52.042877] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.228 [2024-09-27 22:32:52.042928] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:56.228 spare 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 [2024-09-27 22:32:52.052049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.228 [2024-09-27 22:32:52.054335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.228 [2024-09-27 22:32:52.054616] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:56.228 [2024-09-27 22:32:52.054639] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:56.228 [2024-09-27 22:32:52.054996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:56.228 [2024-09-27 22:32:52.055173] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:56.228 [2024-09-27 22:32:52.055185] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:15:56.228 [2024-09-27 22:32:52.055373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.228 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.487 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.487 
"name": "raid_bdev1", 00:15:56.487 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:15:56.487 "strip_size_kb": 0, 00:15:56.487 "state": "online", 00:15:56.487 "raid_level": "raid1", 00:15:56.487 "superblock": false, 00:15:56.487 "num_base_bdevs": 2, 00:15:56.487 "num_base_bdevs_discovered": 2, 00:15:56.487 "num_base_bdevs_operational": 2, 00:15:56.487 "base_bdevs_list": [ 00:15:56.487 { 00:15:56.487 "name": "BaseBdev1", 00:15:56.487 "uuid": "6585e2ac-4d02-5c4d-a7c3-51df208674b6", 00:15:56.487 "is_configured": true, 00:15:56.487 "data_offset": 0, 00:15:56.487 "data_size": 65536 00:15:56.487 }, 00:15:56.487 { 00:15:56.487 "name": "BaseBdev2", 00:15:56.487 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:15:56.487 "is_configured": true, 00:15:56.487 "data_offset": 0, 00:15:56.487 "data_size": 65536 00:15:56.487 } 00:15:56.487 ] 00:15:56.487 }' 00:15:56.487 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.487 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 [2024-09-27 22:32:52.472186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 [2024-09-27 22:32:52.563861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.745 22:32:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.745 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.004 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.004 "name": "raid_bdev1", 00:15:57.004 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:15:57.004 "strip_size_kb": 0, 00:15:57.004 "state": "online", 00:15:57.004 "raid_level": "raid1", 00:15:57.004 "superblock": false, 00:15:57.004 "num_base_bdevs": 2, 00:15:57.004 "num_base_bdevs_discovered": 1, 00:15:57.004 "num_base_bdevs_operational": 1, 00:15:57.004 "base_bdevs_list": [ 00:15:57.004 { 00:15:57.004 "name": null, 00:15:57.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.004 "is_configured": false, 00:15:57.004 "data_offset": 0, 00:15:57.004 "data_size": 65536 00:15:57.004 }, 00:15:57.004 { 00:15:57.004 "name": "BaseBdev2", 00:15:57.004 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:15:57.004 "is_configured": true, 00:15:57.004 "data_offset": 0, 00:15:57.004 "data_size": 65536 00:15:57.004 } 00:15:57.004 ] 00:15:57.004 }' 00:15:57.004 22:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:57.004 22:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.004 [2024-09-27 22:32:52.670116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:57.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:57.004 Zero copy mechanism will not be used. 00:15:57.004 Running I/O for 60 seconds... 00:15:57.261 22:32:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.261 22:32:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.261 22:32:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.261 [2024-09-27 22:32:53.053065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.261 22:32:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.261 22:32:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.261 [2024-09-27 22:32:53.114156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:57.261 [2024-09-27 22:32:53.116581] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.519 [2024-09-27 22:32:53.226276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:57.519 [2024-09-27 22:32:53.226869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:57.777 [2024-09-27 22:32:53.466041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:57.777 [2024-09-27 22:32:53.466376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:58.035 181.00 IOPS, 543.00 MiB/s 
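The bdevperf stat lines interleaved here ("181.00 IOPS, 543.00 MiB/s", later "143.50 IOPS, 430.50 MiB/s") are consistent with the fixed 3 MiB I/O size reported by the zero-copy notice above (3145728 bytes): throughput in MiB/s is simply IOPS × 3. A sketch of that check (later samples such as "159.33 IOPS, 478.00 MiB/s" differ only by rounding of the underlying rate):

```python
# Sketch: relate bdevperf's IOPS and MiB/s columns via the I/O size
# named in the zero-copy threshold notice (3145728 bytes = 3 MiB).

IO_SIZE_BYTES = 3145728
IO_SIZE_MIB = IO_SIZE_BYTES / (1024 * 1024)   # 3.0

def throughput_mib_s(iops: float) -> float:
    return iops * IO_SIZE_MIB

print(throughput_mib_s(181.00))   # 543.0
print(throughput_mib_s(143.50))   # 430.5
```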
[2024-09-27 22:32:53.785622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:58.035 [2024-09-27 22:32:53.904524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.035 [2024-09-27 22:32:53.905466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.294 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.294 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.294 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.294 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.295 [2024-09-27 22:32:54.125901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.295 "name": "raid_bdev1", 00:15:58.295 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:15:58.295 "strip_size_kb": 0, 00:15:58.295 
"state": "online", 00:15:58.295 "raid_level": "raid1", 00:15:58.295 "superblock": false, 00:15:58.295 "num_base_bdevs": 2, 00:15:58.295 "num_base_bdevs_discovered": 2, 00:15:58.295 "num_base_bdevs_operational": 2, 00:15:58.295 "process": { 00:15:58.295 "type": "rebuild", 00:15:58.295 "target": "spare", 00:15:58.295 "progress": { 00:15:58.295 "blocks": 12288, 00:15:58.295 "percent": 18 00:15:58.295 } 00:15:58.295 }, 00:15:58.295 "base_bdevs_list": [ 00:15:58.295 { 00:15:58.295 "name": "spare", 00:15:58.295 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:15:58.295 "is_configured": true, 00:15:58.295 "data_offset": 0, 00:15:58.295 "data_size": 65536 00:15:58.295 }, 00:15:58.295 { 00:15:58.295 "name": "BaseBdev2", 00:15:58.295 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:15:58.295 "is_configured": true, 00:15:58.295 "data_offset": 0, 00:15:58.295 "data_size": 65536 00:15:58.295 } 00:15:58.295 ] 00:15:58.295 }' 00:15:58.295 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.554 [2024-09-27 22:32:54.229713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.554 [2024-09-27 22:32:54.350266] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.554 [2024-09-27 
22:32:54.359656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.554 [2024-09-27 22:32:54.359734] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.554 [2024-09-27 22:32:54.359754] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.554 [2024-09-27 22:32:54.409859] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.554 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.814 "name": "raid_bdev1", 00:15:58.814 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:15:58.814 "strip_size_kb": 0, 00:15:58.814 "state": "online", 00:15:58.814 "raid_level": "raid1", 00:15:58.814 "superblock": false, 00:15:58.814 "num_base_bdevs": 2, 00:15:58.814 "num_base_bdevs_discovered": 1, 00:15:58.814 "num_base_bdevs_operational": 1, 00:15:58.814 "base_bdevs_list": [ 00:15:58.814 { 00:15:58.814 "name": null, 00:15:58.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.814 "is_configured": false, 00:15:58.814 "data_offset": 0, 00:15:58.814 "data_size": 65536 00:15:58.814 }, 00:15:58.814 { 00:15:58.814 "name": "BaseBdev2", 00:15:58.814 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:15:58.814 "is_configured": true, 00:15:58.814 "data_offset": 0, 00:15:58.814 "data_size": 65536 00:15:58.814 } 00:15:58.814 ] 00:15:58.814 }' 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.814 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.072 143.50 IOPS, 430.50 MiB/s 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.072 22:32:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.072 "name": "raid_bdev1", 00:15:59.072 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:15:59.072 "strip_size_kb": 0, 00:15:59.072 "state": "online", 00:15:59.072 "raid_level": "raid1", 00:15:59.072 "superblock": false, 00:15:59.072 "num_base_bdevs": 2, 00:15:59.072 "num_base_bdevs_discovered": 1, 00:15:59.072 "num_base_bdevs_operational": 1, 00:15:59.072 "base_bdevs_list": [ 00:15:59.072 { 00:15:59.072 "name": null, 00:15:59.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.072 "is_configured": false, 00:15:59.072 "data_offset": 0, 00:15:59.072 "data_size": 65536 00:15:59.072 }, 00:15:59.072 { 00:15:59.072 "name": "BaseBdev2", 00:15:59.072 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:15:59.072 "is_configured": true, 00:15:59.072 "data_offset": 0, 00:15:59.072 "data_size": 65536 00:15:59.072 } 00:15:59.072 ] 00:15:59.072 }' 00:15:59.072 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.330 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.330 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.330 22:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.330 22:32:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.330 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.330 22:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.330 [2024-09-27 22:32:55.011514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.330 22:32:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.330 22:32:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.330 [2024-09-27 22:32:55.058232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:59.330 [2024-09-27 22:32:55.060642] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.331 [2024-09-27 22:32:55.173606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:59.589 [2024-09-27 22:32:55.296325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:59.589 [2024-09-27 22:32:55.296639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:59.846 [2024-09-27 22:32:55.537306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:00.108 159.33 IOPS, 478.00 MiB/s [2024-09-27 22:32:55.748004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:00.108 [2024-09-27 22:32:55.748575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:00.108 [2024-09-27 22:32:55.985398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.367 "name": "raid_bdev1", 00:16:00.367 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:00.367 "strip_size_kb": 0, 00:16:00.367 "state": "online", 00:16:00.367 "raid_level": "raid1", 00:16:00.367 "superblock": false, 00:16:00.367 "num_base_bdevs": 2, 00:16:00.367 "num_base_bdevs_discovered": 2, 00:16:00.367 "num_base_bdevs_operational": 2, 00:16:00.367 "process": { 00:16:00.367 "type": "rebuild", 00:16:00.367 "target": "spare", 00:16:00.367 "progress": { 00:16:00.367 "blocks": 14336, 00:16:00.367 "percent": 21 00:16:00.367 } 00:16:00.367 }, 00:16:00.367 "base_bdevs_list": [ 00:16:00.367 { 00:16:00.367 "name": "spare", 00:16:00.367 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:00.367 "is_configured": true, 00:16:00.367 "data_offset": 0, 
00:16:00.367 "data_size": 65536 00:16:00.367 }, 00:16:00.367 { 00:16:00.367 "name": "BaseBdev2", 00:16:00.367 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:00.367 "is_configured": true, 00:16:00.367 "data_offset": 0, 00:16:00.367 "data_size": 65536 00:16:00.367 } 00:16:00.367 ] 00:16:00.367 }' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.367 22:32:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.367 "name": "raid_bdev1", 00:16:00.367 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:00.367 "strip_size_kb": 0, 00:16:00.367 "state": "online", 00:16:00.367 "raid_level": "raid1", 00:16:00.367 "superblock": false, 00:16:00.367 "num_base_bdevs": 2, 00:16:00.367 "num_base_bdevs_discovered": 2, 00:16:00.367 "num_base_bdevs_operational": 2, 00:16:00.367 "process": { 00:16:00.367 "type": "rebuild", 00:16:00.367 "target": "spare", 00:16:00.367 "progress": { 00:16:00.367 "blocks": 16384, 00:16:00.367 "percent": 25 00:16:00.367 } 00:16:00.367 }, 00:16:00.367 "base_bdevs_list": [ 00:16:00.367 { 00:16:00.367 "name": "spare", 00:16:00.367 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:00.367 "is_configured": true, 00:16:00.367 "data_offset": 0, 00:16:00.367 "data_size": 65536 00:16:00.367 }, 00:16:00.367 { 00:16:00.367 "name": "BaseBdev2", 00:16:00.367 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:00.367 "is_configured": true, 00:16:00.367 "data_offset": 0, 00:16:00.367 "data_size": 65536 00:16:00.367 } 00:16:00.367 ] 00:16:00.367 }' 00:16:00.367 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.626 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.626 22:32:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.626 [2024-09-27 22:32:56.323093] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:00.626 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.626 22:32:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.450 138.50 IOPS, 415.50 MiB/s [2024-09-27 22:32:57.084534] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:01.450 [2024-09-27 22:32:57.194080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:01.709 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.709 "name": "raid_bdev1", 00:16:01.709 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:01.709 "strip_size_kb": 0, 00:16:01.709 "state": "online", 00:16:01.709 "raid_level": "raid1", 00:16:01.709 "superblock": false, 00:16:01.709 "num_base_bdevs": 2, 00:16:01.709 "num_base_bdevs_discovered": 2, 00:16:01.709 "num_base_bdevs_operational": 2, 00:16:01.709 "process": { 00:16:01.709 "type": "rebuild", 00:16:01.709 "target": "spare", 00:16:01.709 "progress": { 00:16:01.709 "blocks": 34816, 00:16:01.709 "percent": 53 00:16:01.709 } 00:16:01.709 }, 00:16:01.709 "base_bdevs_list": [ 00:16:01.709 { 00:16:01.709 "name": "spare", 00:16:01.709 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:01.709 "is_configured": true, 00:16:01.709 "data_offset": 0, 00:16:01.709 "data_size": 65536 00:16:01.709 }, 00:16:01.710 { 00:16:01.710 "name": "BaseBdev2", 00:16:01.710 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:01.710 "is_configured": true, 00:16:01.710 "data_offset": 0, 00:16:01.710 "data_size": 65536 00:16:01.710 } 00:16:01.710 ] 00:16:01.710 }' 00:16:01.710 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.710 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.710 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.710 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.710 22:32:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.532 119.80 IOPS, 359.40 MiB/s [2024-09-27 22:32:58.327711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.791 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.791 "name": "raid_bdev1", 00:16:02.791 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:02.791 "strip_size_kb": 0, 00:16:02.791 "state": "online", 00:16:02.791 "raid_level": "raid1", 00:16:02.791 "superblock": false, 00:16:02.791 "num_base_bdevs": 2, 00:16:02.791 "num_base_bdevs_discovered": 2, 00:16:02.791 "num_base_bdevs_operational": 2, 00:16:02.791 "process": { 00:16:02.791 "type": "rebuild", 00:16:02.791 "target": "spare", 00:16:02.791 "progress": { 00:16:02.791 "blocks": 53248, 00:16:02.791 "percent": 81 00:16:02.791 } 00:16:02.791 }, 00:16:02.791 "base_bdevs_list": [ 00:16:02.792 { 00:16:02.792 "name": "spare", 00:16:02.792 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:02.792 "is_configured": true, 00:16:02.792 "data_offset": 
0, 00:16:02.792 "data_size": 65536 00:16:02.792 }, 00:16:02.792 { 00:16:02.792 "name": "BaseBdev2", 00:16:02.792 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:02.792 "is_configured": true, 00:16:02.792 "data_offset": 0, 00:16:02.792 "data_size": 65536 00:16:02.792 } 00:16:02.792 ] 00:16:02.792 }' 00:16:02.792 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.792 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.792 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.792 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.792 22:32:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.792 [2024-09-27 22:32:58.658606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:03.615 106.00 IOPS, 318.00 MiB/s [2024-09-27 22:32:59.208882] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.615 [2024-09-27 22:32:59.315069] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.615 [2024-09-27 22:32:59.318528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.873 95.43 IOPS, 286.29 MiB/s 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.873 "name": "raid_bdev1", 00:16:03.873 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:03.873 "strip_size_kb": 0, 00:16:03.873 "state": "online", 00:16:03.873 "raid_level": "raid1", 00:16:03.873 "superblock": false, 00:16:03.873 "num_base_bdevs": 2, 00:16:03.873 "num_base_bdevs_discovered": 2, 00:16:03.873 "num_base_bdevs_operational": 2, 00:16:03.873 "base_bdevs_list": [ 00:16:03.873 { 00:16:03.873 "name": "spare", 00:16:03.873 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:03.873 "is_configured": true, 00:16:03.873 "data_offset": 0, 00:16:03.873 "data_size": 65536 00:16:03.873 }, 00:16:03.873 { 00:16:03.873 "name": "BaseBdev2", 00:16:03.873 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:03.873 "is_configured": true, 00:16:03.873 "data_offset": 0, 00:16:03.873 "data_size": 65536 00:16:03.873 } 00:16:03.873 ] 00:16:03.873 }' 00:16:03.873 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.132 "name": "raid_bdev1", 00:16:04.132 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:04.132 "strip_size_kb": 0, 00:16:04.132 "state": "online", 00:16:04.132 "raid_level": "raid1", 00:16:04.132 "superblock": false, 00:16:04.132 "num_base_bdevs": 2, 00:16:04.132 "num_base_bdevs_discovered": 2, 00:16:04.132 "num_base_bdevs_operational": 2, 00:16:04.132 "base_bdevs_list": [ 00:16:04.132 { 00:16:04.132 "name": "spare", 00:16:04.132 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:04.132 "is_configured": true, 00:16:04.132 "data_offset": 0, 00:16:04.132 "data_size": 65536 
00:16:04.132 }, 00:16:04.132 { 00:16:04.132 "name": "BaseBdev2", 00:16:04.132 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:04.132 "is_configured": true, 00:16:04.132 "data_offset": 0, 00:16:04.132 "data_size": 65536 00:16:04.132 } 00:16:04.132 ] 00:16:04.132 }' 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.132 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.133 "name": "raid_bdev1", 00:16:04.133 "uuid": "3f534e9a-b4e4-4de1-a2a9-a45d62a68cbf", 00:16:04.133 "strip_size_kb": 0, 00:16:04.133 "state": "online", 00:16:04.133 "raid_level": "raid1", 00:16:04.133 "superblock": false, 00:16:04.133 "num_base_bdevs": 2, 00:16:04.133 "num_base_bdevs_discovered": 2, 00:16:04.133 "num_base_bdevs_operational": 2, 00:16:04.133 "base_bdevs_list": [ 00:16:04.133 { 00:16:04.133 "name": "spare", 00:16:04.133 "uuid": "18e8c4bb-3645-5b4d-b979-fae816386450", 00:16:04.133 "is_configured": true, 00:16:04.133 "data_offset": 0, 00:16:04.133 "data_size": 65536 00:16:04.133 }, 00:16:04.133 { 00:16:04.133 "name": "BaseBdev2", 00:16:04.133 "uuid": "90e3599e-d340-5a75-a2b3-20245ec13d81", 00:16:04.133 "is_configured": true, 00:16:04.133 "data_offset": 0, 00:16:04.133 "data_size": 65536 00:16:04.133 } 00:16:04.133 ] 00:16:04.133 }' 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.133 22:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.701 [2024-09-27 22:33:00.396793] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.701 
[2024-09-27 22:33:00.397002] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.701 00:16:04.701 Latency(us) 00:16:04.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.701 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:04.701 raid_bdev1 : 7.84 88.98 266.94 0.00 0.00 15788.19 327.35 114543.24 00:16:04.701 =================================================================================================================== 00:16:04.701 Total : 88.98 266.94 0.00 0.00 15788.19 327.35 114543.24 00:16:04.701 [2024-09-27 22:33:00.530022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.701 { 00:16:04.701 "results": [ 00:16:04.701 { 00:16:04.701 "job": "raid_bdev1", 00:16:04.701 "core_mask": "0x1", 00:16:04.701 "workload": "randrw", 00:16:04.701 "percentage": 50, 00:16:04.701 "status": "finished", 00:16:04.701 "queue_depth": 2, 00:16:04.701 "io_size": 3145728, 00:16:04.701 "runtime": 7.844389, 00:16:04.701 "iops": 88.98079888694964, 00:16:04.701 "mibps": 266.94239666084894, 00:16:04.701 "io_failed": 0, 00:16:04.701 "io_timeout": 0, 00:16:04.701 "avg_latency_us": 15788.189687115224, 00:16:04.701 "min_latency_us": 327.3510040160643, 00:16:04.701 "max_latency_us": 114543.24176706828 00:16:04.701 } 00:16:04.701 ], 00:16:04.701 "core_count": 1 00:16:04.701 } 00:16:04.701 [2024-09-27 22:33:00.530334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.701 [2024-09-27 22:33:00.530450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.701 [2024-09-27 22:33:00.530465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.701 22:33:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.701 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.959 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:05.217 /dev/nbd0 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.217 1+0 records in 00:16:05.217 1+0 records out 00:16:05.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508827 s, 8.0 MB/s 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.217 22:33:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:05.476 /dev/nbd1 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.476 1+0 records in 00:16:05.476 1+0 records out 00:16:05.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481052 s, 8.5 MB/s 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.476 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.734 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.992 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.993 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.993 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.993 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.993 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:05.993 22:33:01 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.993 22:33:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77354 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 77354 ']' 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 77354 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77354 00:16:06.251 killing process with pid 77354 00:16:06.251 Received shutdown signal, test time was about 9.420194 seconds 00:16:06.251 00:16:06.251 Latency(us) 00:16:06.251 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:16:06.251 =================================================================================================================== 00:16:06.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77354' 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 77354 00:16:06.251 [2024-09-27 22:33:02.077907] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.251 22:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 77354 00:16:06.509 [2024-09-27 22:33:02.330071] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:09.049 00:16:09.049 real 0m14.008s 00:16:09.049 user 0m17.238s 00:16:09.049 sys 0m1.901s 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.049 ************************************ 00:16:09.049 END TEST raid_rebuild_test_io 00:16:09.049 ************************************ 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 22:33:04 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:09.049 22:33:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:09.049 22:33:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:09.049 22:33:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.049 ************************************ 00:16:09.049 START TEST raid_rebuild_test_sb_io 
00:16:09.049 ************************************ 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local 
strip_size 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:09.049 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77752 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77752 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77752 ']' 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.050 22:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.050 [2024-09-27 22:33:04.738268] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:16:09.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.050 Zero copy mechanism will not be used. 00:16:09.050 [2024-09-27 22:33:04.738616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77752 ] 00:16:09.050 [2024-09-27 22:33:04.914841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.309 [2024-09-27 22:33:05.168552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.567 [2024-09-27 22:33:05.429562] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.567 [2024-09-27 22:33:05.429798] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.135 BaseBdev1_malloc 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.135 [2024-09-27 22:33:05.969224] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:10.135 [2024-09-27 22:33:05.969317] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.135 [2024-09-27 22:33:05.969346] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:10.135 [2024-09-27 22:33:05.969365] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.135 [2024-09-27 22:33:05.972143] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.135 [2024-09-27 22:33:05.972213] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.135 BaseBdev1 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.135 22:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 BaseBdev2_malloc 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 [2024-09-27 22:33:06.032274] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:10.400 [2024-09-27 22:33:06.032529] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.400 [2024-09-27 22:33:06.032564] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:10.400 [2024-09-27 22:33:06.032600] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.400 [2024-09-27 22:33:06.035248] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.400 [2024-09-27 22:33:06.035295] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:10.400 BaseBdev2 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 spare_malloc 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 spare_delay 
00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 [2024-09-27 22:33:06.108225] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.400 [2024-09-27 22:33:06.108315] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.400 [2024-09-27 22:33:06.108341] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:10.400 [2024-09-27 22:33:06.108357] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.400 [2024-09-27 22:33:06.111049] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.400 [2024-09-27 22:33:06.111096] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.400 spare 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 [2024-09-27 22:33:06.120254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.400 [2024-09-27 22:33:06.122578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.400 [2024-09-27 22:33:06.122799] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.400 [2024-09-27 22:33:06.122817] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:10.400 [2024-09-27 22:33:06.123149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:10.400 [2024-09-27 22:33:06.123351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.400 [2024-09-27 22:33:06.123362] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.400 [2024-09-27 22:33:06.123553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.400 22:33:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.400 "name": "raid_bdev1", 00:16:10.400 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:10.400 "strip_size_kb": 0, 00:16:10.400 "state": "online", 00:16:10.400 "raid_level": "raid1", 00:16:10.400 "superblock": true, 00:16:10.400 "num_base_bdevs": 2, 00:16:10.400 "num_base_bdevs_discovered": 2, 00:16:10.400 "num_base_bdevs_operational": 2, 00:16:10.400 "base_bdevs_list": [ 00:16:10.400 { 00:16:10.400 "name": "BaseBdev1", 00:16:10.400 "uuid": "dcc1f67c-c13f-5b75-b5c2-a21205783e91", 00:16:10.400 "is_configured": true, 00:16:10.400 "data_offset": 2048, 00:16:10.400 "data_size": 63488 00:16:10.400 }, 00:16:10.400 { 00:16:10.400 "name": "BaseBdev2", 00:16:10.400 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:10.400 "is_configured": true, 00:16:10.400 "data_offset": 2048, 00:16:10.400 "data_size": 63488 00:16:10.400 } 00:16:10.400 ] 00:16:10.400 }' 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.400 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:10.969 22:33:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 [2024-09-27 22:33:06.584187] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 [2024-09-27 22:33:06.667915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.969 "name": "raid_bdev1", 00:16:10.969 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:10.969 "strip_size_kb": 0, 00:16:10.969 "state": "online", 00:16:10.969 
"raid_level": "raid1", 00:16:10.969 "superblock": true, 00:16:10.969 "num_base_bdevs": 2, 00:16:10.969 "num_base_bdevs_discovered": 1, 00:16:10.969 "num_base_bdevs_operational": 1, 00:16:10.969 "base_bdevs_list": [ 00:16:10.969 { 00:16:10.969 "name": null, 00:16:10.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.969 "is_configured": false, 00:16:10.969 "data_offset": 0, 00:16:10.969 "data_size": 63488 00:16:10.969 }, 00:16:10.969 { 00:16:10.969 "name": "BaseBdev2", 00:16:10.969 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 } 00:16:10.969 ] 00:16:10.969 }' 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.969 22:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.969 Zero copy mechanism will not be used. 00:16:10.969 Running I/O for 60 seconds... 
00:16:10.969 [2024-09-27 22:33:06.778220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.229 22:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.229 22:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.229 22:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.229 [2024-09-27 22:33:07.061346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.489 22:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.489 22:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:11.489 [2024-09-27 22:33:07.135489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:11.489 [2024-09-27 22:33:07.138191] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.489 [2024-09-27 22:33:07.255027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:11.489 [2024-09-27 22:33:07.255938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:11.749 [2024-09-27 22:33:07.466529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:11.749 [2024-09-27 22:33:07.466868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:12.007 [2024-09-27 22:33:07.707341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:12.266 153.00 IOPS, 459.00 MiB/s [2024-09-27 22:33:07.936522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.266 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.267 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.525 "name": "raid_bdev1", 00:16:12.525 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:12.525 "strip_size_kb": 0, 00:16:12.525 "state": "online", 00:16:12.525 "raid_level": "raid1", 00:16:12.525 "superblock": true, 00:16:12.525 "num_base_bdevs": 2, 00:16:12.525 "num_base_bdevs_discovered": 2, 00:16:12.525 "num_base_bdevs_operational": 2, 00:16:12.525 "process": { 00:16:12.525 "type": "rebuild", 00:16:12.525 "target": "spare", 00:16:12.525 "progress": { 00:16:12.525 "blocks": 10240, 00:16:12.525 "percent": 16 00:16:12.525 } 00:16:12.525 }, 00:16:12.525 "base_bdevs_list": [ 00:16:12.525 { 00:16:12.525 "name": "spare", 00:16:12.525 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 
00:16:12.525 "is_configured": true, 00:16:12.525 "data_offset": 2048, 00:16:12.525 "data_size": 63488 00:16:12.525 }, 00:16:12.525 { 00:16:12.525 "name": "BaseBdev2", 00:16:12.525 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:12.525 "is_configured": true, 00:16:12.525 "data_offset": 2048, 00:16:12.525 "data_size": 63488 00:16:12.525 } 00:16:12.525 ] 00:16:12.525 }' 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:12.525 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.526 [2024-09-27 22:33:08.268250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.526 [2024-09-27 22:33:08.285656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:12.526 [2024-09-27 22:33:08.308145] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.526 [2024-09-27 22:33:08.323761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.526 [2024-09-27 22:33:08.324134] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.526 [2024-09-27 22:33:08.324189] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.526 [2024-09-27 
22:33:08.367099] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.526 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.809 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.809 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:12.809 "name": "raid_bdev1", 00:16:12.809 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:12.809 "strip_size_kb": 0, 00:16:12.809 "state": "online", 00:16:12.809 "raid_level": "raid1", 00:16:12.809 "superblock": true, 00:16:12.809 "num_base_bdevs": 2, 00:16:12.809 "num_base_bdevs_discovered": 1, 00:16:12.809 "num_base_bdevs_operational": 1, 00:16:12.809 "base_bdevs_list": [ 00:16:12.809 { 00:16:12.809 "name": null, 00:16:12.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.809 "is_configured": false, 00:16:12.809 "data_offset": 0, 00:16:12.809 "data_size": 63488 00:16:12.809 }, 00:16:12.809 { 00:16:12.809 "name": "BaseBdev2", 00:16:12.809 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:12.809 "is_configured": true, 00:16:12.809 "data_offset": 2048, 00:16:12.809 "data_size": 63488 00:16:12.809 } 00:16:12.809 ] 00:16:12.809 }' 00:16:12.809 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.809 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.068 158.50 IOPS, 475.50 MiB/s 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.068 "name": "raid_bdev1", 00:16:13.068 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:13.068 "strip_size_kb": 0, 00:16:13.068 "state": "online", 00:16:13.068 "raid_level": "raid1", 00:16:13.068 "superblock": true, 00:16:13.068 "num_base_bdevs": 2, 00:16:13.068 "num_base_bdevs_discovered": 1, 00:16:13.068 "num_base_bdevs_operational": 1, 00:16:13.068 "base_bdevs_list": [ 00:16:13.068 { 00:16:13.068 "name": null, 00:16:13.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.068 "is_configured": false, 00:16:13.068 "data_offset": 0, 00:16:13.068 "data_size": 63488 00:16:13.068 }, 00:16:13.068 { 00:16:13.068 "name": "BaseBdev2", 00:16:13.068 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:13.068 "is_configured": true, 00:16:13.068 "data_offset": 2048, 00:16:13.068 "data_size": 63488 00:16:13.068 } 00:16:13.068 ] 00:16:13.068 }' 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.068 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.327 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.327 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.327 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.327 22:33:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.327 [2024-09-27 22:33:08.962525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.327 22:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.327 22:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:13.327 [2024-09-27 22:33:09.026649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:13.327 [2024-09-27 22:33:09.029095] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.586 [2024-09-27 22:33:09.288200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.586 [2024-09-27 22:33:09.288774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.845 [2024-09-27 22:33:09.635662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.845 [2024-09-27 22:33:09.636300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:14.104 160.33 IOPS, 481.00 MiB/s [2024-09-27 22:33:09.851781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:14.104 [2024-09-27 22:33:09.852166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.362 
22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.362 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.363 "name": "raid_bdev1", 00:16:14.363 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:14.363 "strip_size_kb": 0, 00:16:14.363 "state": "online", 00:16:14.363 "raid_level": "raid1", 00:16:14.363 "superblock": true, 00:16:14.363 "num_base_bdevs": 2, 00:16:14.363 "num_base_bdevs_discovered": 2, 00:16:14.363 "num_base_bdevs_operational": 2, 00:16:14.363 "process": { 00:16:14.363 "type": "rebuild", 00:16:14.363 "target": "spare", 00:16:14.363 "progress": { 00:16:14.363 "blocks": 10240, 00:16:14.363 "percent": 16 00:16:14.363 } 00:16:14.363 }, 00:16:14.363 "base_bdevs_list": [ 00:16:14.363 { 00:16:14.363 "name": "spare", 00:16:14.363 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:14.363 "is_configured": true, 00:16:14.363 "data_offset": 2048, 00:16:14.363 "data_size": 63488 00:16:14.363 }, 00:16:14.363 { 00:16:14.363 "name": "BaseBdev2", 00:16:14.363 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:14.363 "is_configured": true, 00:16:14.363 "data_offset": 2048, 00:16:14.363 "data_size": 63488 00:16:14.363 } 00:16:14.363 ] 00:16:14.363 }' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:14.363 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.363 22:33:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.363 "name": "raid_bdev1", 00:16:14.363 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:14.363 "strip_size_kb": 0, 00:16:14.363 "state": "online", 00:16:14.363 "raid_level": "raid1", 00:16:14.363 "superblock": true, 00:16:14.363 "num_base_bdevs": 2, 00:16:14.363 "num_base_bdevs_discovered": 2, 00:16:14.363 "num_base_bdevs_operational": 2, 00:16:14.363 "process": { 00:16:14.363 "type": "rebuild", 00:16:14.363 "target": "spare", 00:16:14.363 "progress": { 00:16:14.363 "blocks": 12288, 00:16:14.363 "percent": 19 00:16:14.363 } 00:16:14.363 }, 00:16:14.363 "base_bdevs_list": [ 00:16:14.363 { 00:16:14.363 "name": "spare", 00:16:14.363 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:14.363 "is_configured": true, 00:16:14.363 "data_offset": 2048, 00:16:14.363 "data_size": 63488 00:16:14.363 }, 00:16:14.363 { 00:16:14.363 "name": "BaseBdev2", 00:16:14.363 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:14.363 "is_configured": true, 00:16:14.363 "data_offset": 2048, 00:16:14.363 "data_size": 63488 00:16:14.363 } 00:16:14.363 ] 00:16:14.363 }' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.363 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.622 22:33:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.622 22:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.622 [2024-09-27 22:33:10.307914] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:14.622 [2024-09-27 22:33:10.308490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:15.191 142.50 IOPS, 427.50 MiB/s [2024-09-27 22:33:11.013710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:15.450 [2024-09-27 22:33:11.250125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:16:15.450 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.710 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.710 "name": "raid_bdev1", 00:16:15.710 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:15.710 "strip_size_kb": 0, 00:16:15.710 "state": "online", 00:16:15.710 "raid_level": "raid1", 00:16:15.710 "superblock": true, 00:16:15.710 "num_base_bdevs": 2, 00:16:15.710 "num_base_bdevs_discovered": 2, 00:16:15.710 "num_base_bdevs_operational": 2, 00:16:15.710 "process": { 00:16:15.710 "type": "rebuild", 00:16:15.710 "target": "spare", 00:16:15.710 "progress": { 00:16:15.710 "blocks": 28672, 00:16:15.710 "percent": 45 00:16:15.710 } 00:16:15.710 }, 00:16:15.710 "base_bdevs_list": [ 00:16:15.710 { 00:16:15.710 "name": "spare", 00:16:15.710 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:15.710 "is_configured": true, 00:16:15.710 "data_offset": 2048, 00:16:15.710 "data_size": 63488 00:16:15.710 }, 00:16:15.710 { 00:16:15.710 "name": "BaseBdev2", 00:16:15.710 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:15.710 "is_configured": true, 00:16:15.710 "data_offset": 2048, 00:16:15.710 "data_size": 63488 00:16:15.710 } 00:16:15.710 ] 00:16:15.710 }' 00:16:15.710 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.710 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.710 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.710 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.710 22:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.969 [2024-09-27 22:33:11.614891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:16:16.228 126.20 IOPS, 378.60 MiB/s [2024-09-27 22:33:11.856065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:16.228 [2024-09-27 22:33:12.061056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:16.228 [2024-09-27 22:33:12.061606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:16.796 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.796 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.796 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.797 "name": "raid_bdev1", 00:16:16.797 "uuid": 
"3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:16.797 "strip_size_kb": 0, 00:16:16.797 "state": "online", 00:16:16.797 "raid_level": "raid1", 00:16:16.797 "superblock": true, 00:16:16.797 "num_base_bdevs": 2, 00:16:16.797 "num_base_bdevs_discovered": 2, 00:16:16.797 "num_base_bdevs_operational": 2, 00:16:16.797 "process": { 00:16:16.797 "type": "rebuild", 00:16:16.797 "target": "spare", 00:16:16.797 "progress": { 00:16:16.797 "blocks": 45056, 00:16:16.797 "percent": 70 00:16:16.797 } 00:16:16.797 }, 00:16:16.797 "base_bdevs_list": [ 00:16:16.797 { 00:16:16.797 "name": "spare", 00:16:16.797 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:16.797 "is_configured": true, 00:16:16.797 "data_offset": 2048, 00:16:16.797 "data_size": 63488 00:16:16.797 }, 00:16:16.797 { 00:16:16.797 "name": "BaseBdev2", 00:16:16.797 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:16.797 "is_configured": true, 00:16:16.797 "data_offset": 2048, 00:16:16.797 "data_size": 63488 00:16:16.797 } 00:16:16.797 ] 00:16:16.797 }' 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.797 22:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.056 [2024-09-27 22:33:12.751313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:17.314 113.33 IOPS, 340.00 MiB/s [2024-09-27 22:33:12.966635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.881 "name": "raid_bdev1", 00:16:17.881 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:17.881 "strip_size_kb": 0, 00:16:17.881 "state": "online", 00:16:17.881 "raid_level": "raid1", 00:16:17.881 "superblock": true, 00:16:17.881 "num_base_bdevs": 2, 00:16:17.881 "num_base_bdevs_discovered": 2, 00:16:17.881 "num_base_bdevs_operational": 2, 00:16:17.881 "process": { 00:16:17.881 "type": "rebuild", 00:16:17.881 "target": "spare", 00:16:17.881 "progress": { 00:16:17.881 "blocks": 61440, 00:16:17.881 "percent": 96 00:16:17.881 } 00:16:17.881 }, 00:16:17.881 "base_bdevs_list": [ 00:16:17.881 { 00:16:17.881 "name": "spare", 00:16:17.881 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 
00:16:17.881 "is_configured": true, 00:16:17.881 "data_offset": 2048, 00:16:17.881 "data_size": 63488 00:16:17.881 }, 00:16:17.881 { 00:16:17.881 "name": "BaseBdev2", 00:16:17.881 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:17.881 "is_configured": true, 00:16:17.881 "data_offset": 2048, 00:16:17.881 "data_size": 63488 00:16:17.881 } 00:16:17.881 ] 00:16:17.881 }' 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.881 [2024-09-27 22:33:13.609475] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.881 22:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.881 [2024-09-27 22:33:13.715113] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:17.881 [2024-09-27 22:33:13.717833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.076 101.57 IOPS, 304.71 MiB/s 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.076 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.076 "name": "raid_bdev1", 00:16:19.076 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:19.076 "strip_size_kb": 0, 00:16:19.076 "state": "online", 00:16:19.076 "raid_level": "raid1", 00:16:19.076 "superblock": true, 00:16:19.076 "num_base_bdevs": 2, 00:16:19.076 "num_base_bdevs_discovered": 2, 00:16:19.076 "num_base_bdevs_operational": 2, 00:16:19.076 "base_bdevs_list": [ 00:16:19.076 { 00:16:19.076 "name": "spare", 00:16:19.076 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:19.076 "is_configured": true, 00:16:19.076 "data_offset": 2048, 00:16:19.076 "data_size": 63488 00:16:19.076 }, 00:16:19.077 { 00:16:19.077 "name": "BaseBdev2", 00:16:19.077 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:19.077 "is_configured": true, 00:16:19.077 "data_offset": 2048, 00:16:19.077 "data_size": 63488 00:16:19.077 } 00:16:19.077 ] 00:16:19.077 }' 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:19.077 93.50 IOPS, 280.50 MiB/s 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.077 22:33:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.077 "name": "raid_bdev1", 00:16:19.077 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:19.077 "strip_size_kb": 0, 00:16:19.077 "state": "online", 00:16:19.077 "raid_level": "raid1", 00:16:19.077 "superblock": true, 00:16:19.077 "num_base_bdevs": 2, 00:16:19.077 "num_base_bdevs_discovered": 2, 00:16:19.077 "num_base_bdevs_operational": 2, 00:16:19.077 "base_bdevs_list": [ 00:16:19.077 { 00:16:19.077 "name": "spare", 00:16:19.077 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:19.077 "is_configured": true, 00:16:19.077 "data_offset": 2048, 00:16:19.077 
"data_size": 63488 00:16:19.077 }, 00:16:19.077 { 00:16:19.077 "name": "BaseBdev2", 00:16:19.077 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:19.077 "is_configured": true, 00:16:19.077 "data_offset": 2048, 00:16:19.077 "data_size": 63488 00:16:19.077 } 00:16:19.077 ] 00:16:19.077 }' 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.077 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.336 22:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.336 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.336 "name": "raid_bdev1", 00:16:19.336 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:19.336 "strip_size_kb": 0, 00:16:19.336 "state": "online", 00:16:19.336 "raid_level": "raid1", 00:16:19.336 "superblock": true, 00:16:19.336 "num_base_bdevs": 2, 00:16:19.336 "num_base_bdevs_discovered": 2, 00:16:19.336 "num_base_bdevs_operational": 2, 00:16:19.336 "base_bdevs_list": [ 00:16:19.336 { 00:16:19.336 "name": "spare", 00:16:19.336 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:19.336 "is_configured": true, 00:16:19.336 "data_offset": 2048, 00:16:19.336 "data_size": 63488 00:16:19.336 }, 00:16:19.336 { 00:16:19.336 "name": "BaseBdev2", 00:16:19.336 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:19.336 "is_configured": true, 00:16:19.336 "data_offset": 2048, 00:16:19.336 "data_size": 63488 00:16:19.336 } 00:16:19.336 ] 00:16:19.336 }' 00:16:19.336 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.336 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.595 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.595 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.595 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.595 [2024-09-27 22:33:15.416894] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.595 [2024-09-27 22:33:15.416936] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.853 00:16:19.854 Latency(us) 00:16:19.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.854 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:19.854 raid_bdev1 : 8.71 88.86 266.59 0.00 0.00 15924.81 322.42 115385.47 00:16:19.854 =================================================================================================================== 00:16:19.854 Total : 88.86 266.59 0.00 0.00 15924.81 322.42 115385.47 00:16:19.854 [2024-09-27 22:33:15.501498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.854 [2024-09-27 22:33:15.501569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.854 [2024-09-27 22:33:15.501655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.854 [2024-09-27 22:33:15.501672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:19.854 { 00:16:19.854 "results": [ 00:16:19.854 { 00:16:19.854 "job": "raid_bdev1", 00:16:19.854 "core_mask": "0x1", 00:16:19.854 "workload": "randrw", 00:16:19.854 "percentage": 50, 00:16:19.854 "status": "finished", 00:16:19.854 "queue_depth": 2, 00:16:19.854 "io_size": 3145728, 00:16:19.854 "runtime": 8.710069, 00:16:19.854 "iops": 88.86267146678172, 00:16:19.854 "mibps": 266.5880144003452, 00:16:19.854 "io_failed": 0, 00:16:19.854 "io_timeout": 0, 00:16:19.854 "avg_latency_us": 15924.807255896972, 00:16:19.854 "min_latency_us": 322.4160642570281, 00:16:19.854 "max_latency_us": 115385.47148594378 00:16:19.854 } 00:16:19.854 ], 00:16:19.854 "core_count": 1 00:16:19.854 } 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.854 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:20.125 /dev/nbd0 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.125 1+0 records in 00:16:20.125 1+0 records out 00:16:20.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489399 s, 8.4 MB/s 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.125 22:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:20.385 /dev/nbd1 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.385 1+0 records in 00:16:20.385 1+0 records out 00:16:20.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522422 s, 7.8 MB/s 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.385 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.385 22:33:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.644 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.903 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.163 
22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.163 [2024-09-27 22:33:16.954093] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.163 [2024-09-27 22:33:16.954343] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.163 [2024-09-27 22:33:16.954382] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:21.163 [2024-09-27 22:33:16.954398] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.163 [2024-09-27 22:33:16.957247] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.163 [2024-09-27 22:33:16.957305] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.163 [2024-09-27 22:33:16.957423] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:21.163 [2024-09-27 22:33:16.957486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.163 [2024-09-27 22:33:16.957650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.163 spare 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.163 22:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.423 [2024-09-27 22:33:17.057621] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:16:21.423 [2024-09-27 22:33:17.057891] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.423 [2024-09-27 22:33:17.058307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:21.423 [2024-09-27 22:33:17.058518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:21.423 [2024-09-27 22:33:17.058539] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:21.423 [2024-09-27 22:33:17.058806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.423 "name": "raid_bdev1", 00:16:21.423 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:21.423 "strip_size_kb": 0, 00:16:21.423 "state": "online", 00:16:21.423 "raid_level": "raid1", 00:16:21.423 "superblock": true, 00:16:21.423 "num_base_bdevs": 2, 00:16:21.423 "num_base_bdevs_discovered": 2, 00:16:21.423 "num_base_bdevs_operational": 2, 00:16:21.423 "base_bdevs_list": [ 00:16:21.423 { 00:16:21.423 "name": "spare", 00:16:21.423 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:21.423 "is_configured": true, 00:16:21.423 "data_offset": 2048, 00:16:21.423 "data_size": 63488 00:16:21.423 }, 00:16:21.423 { 00:16:21.423 "name": "BaseBdev2", 00:16:21.423 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:21.423 "is_configured": true, 00:16:21.423 "data_offset": 2048, 00:16:21.423 "data_size": 63488 00:16:21.423 } 00:16:21.423 ] 00:16:21.423 }' 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.423 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.682 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.942 "name": "raid_bdev1", 00:16:21.942 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:21.942 "strip_size_kb": 0, 00:16:21.942 "state": "online", 00:16:21.942 "raid_level": "raid1", 00:16:21.942 "superblock": true, 00:16:21.942 "num_base_bdevs": 2, 00:16:21.942 "num_base_bdevs_discovered": 2, 00:16:21.942 "num_base_bdevs_operational": 2, 00:16:21.942 "base_bdevs_list": [ 00:16:21.942 { 00:16:21.942 "name": "spare", 00:16:21.942 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:21.942 "is_configured": true, 00:16:21.942 "data_offset": 2048, 00:16:21.942 "data_size": 63488 00:16:21.942 }, 00:16:21.942 { 00:16:21.942 "name": "BaseBdev2", 00:16:21.942 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:21.942 "is_configured": true, 00:16:21.942 "data_offset": 2048, 00:16:21.942 "data_size": 63488 00:16:21.942 } 00:16:21.942 ] 00:16:21.942 }' 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.942 [2024-09-27 22:33:17.721952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.942 "name": "raid_bdev1", 00:16:21.942 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:21.942 "strip_size_kb": 0, 00:16:21.942 "state": "online", 00:16:21.942 "raid_level": "raid1", 00:16:21.942 "superblock": true, 00:16:21.942 "num_base_bdevs": 2, 00:16:21.942 "num_base_bdevs_discovered": 1, 00:16:21.942 "num_base_bdevs_operational": 1, 00:16:21.942 "base_bdevs_list": [ 00:16:21.942 { 00:16:21.942 "name": null, 00:16:21.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.942 "is_configured": false, 00:16:21.942 "data_offset": 0, 00:16:21.942 "data_size": 63488 00:16:21.942 }, 00:16:21.942 { 00:16:21.942 "name": "BaseBdev2", 00:16:21.942 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:21.942 
"is_configured": true, 00:16:21.942 "data_offset": 2048, 00:16:21.942 "data_size": 63488 00:16:21.942 } 00:16:21.942 ] 00:16:21.942 }' 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.942 22:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.511 22:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:22.511 22:33:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.511 22:33:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.511 [2024-09-27 22:33:18.209353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.511 [2024-09-27 22:33:18.209786] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.511 [2024-09-27 22:33:18.209812] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:22.511 [2024-09-27 22:33:18.209865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.511 [2024-09-27 22:33:18.229280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:22.511 22:33:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.511 22:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:22.511 [2024-09-27 22:33:18.231705] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.448 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.449 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.449 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.449 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.449 "name": "raid_bdev1", 00:16:23.449 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:23.449 "strip_size_kb": 0, 00:16:23.449 "state": "online", 
00:16:23.449 "raid_level": "raid1", 00:16:23.449 "superblock": true, 00:16:23.449 "num_base_bdevs": 2, 00:16:23.449 "num_base_bdevs_discovered": 2, 00:16:23.449 "num_base_bdevs_operational": 2, 00:16:23.449 "process": { 00:16:23.449 "type": "rebuild", 00:16:23.449 "target": "spare", 00:16:23.449 "progress": { 00:16:23.449 "blocks": 20480, 00:16:23.449 "percent": 32 00:16:23.449 } 00:16:23.449 }, 00:16:23.449 "base_bdevs_list": [ 00:16:23.449 { 00:16:23.449 "name": "spare", 00:16:23.449 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:23.449 "is_configured": true, 00:16:23.449 "data_offset": 2048, 00:16:23.449 "data_size": 63488 00:16:23.449 }, 00:16:23.449 { 00:16:23.449 "name": "BaseBdev2", 00:16:23.449 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:23.449 "is_configured": true, 00:16:23.449 "data_offset": 2048, 00:16:23.449 "data_size": 63488 00:16:23.449 } 00:16:23.449 ] 00:16:23.449 }' 00:16:23.449 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.708 [2024-09-27 22:33:19.388221] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.708 [2024-09-27 22:33:19.437729] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.708 [2024-09-27 
22:33:19.437826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.708 [2024-09-27 22:33:19.437849] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.708 [2024-09-27 22:33:19.437860] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.708 "name": "raid_bdev1", 00:16:23.708 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:23.708 "strip_size_kb": 0, 00:16:23.708 "state": "online", 00:16:23.708 "raid_level": "raid1", 00:16:23.708 "superblock": true, 00:16:23.708 "num_base_bdevs": 2, 00:16:23.708 "num_base_bdevs_discovered": 1, 00:16:23.708 "num_base_bdevs_operational": 1, 00:16:23.708 "base_bdevs_list": [ 00:16:23.708 { 00:16:23.708 "name": null, 00:16:23.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.708 "is_configured": false, 00:16:23.708 "data_offset": 0, 00:16:23.708 "data_size": 63488 00:16:23.708 }, 00:16:23.708 { 00:16:23.708 "name": "BaseBdev2", 00:16:23.708 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:23.708 "is_configured": true, 00:16:23.708 "data_offset": 2048, 00:16:23.708 "data_size": 63488 00:16:23.708 } 00:16:23.708 ] 00:16:23.708 }' 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.708 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.275 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.275 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.275 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.275 [2024-09-27 22:33:19.932371] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.275 [2024-09-27 22:33:19.932450] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.275 [2024-09-27 22:33:19.932482] vbdev_passthru.c: 762:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:16:24.275 [2024-09-27 22:33:19.932495] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.275 [2024-09-27 22:33:19.933028] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.275 [2024-09-27 22:33:19.933064] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.275 [2024-09-27 22:33:19.933176] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:24.275 [2024-09-27 22:33:19.933191] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:24.275 [2024-09-27 22:33:19.933206] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:24.275 [2024-09-27 22:33:19.933235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.275 [2024-09-27 22:33:19.953060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:24.275 spare 00:16:24.275 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.275 22:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:24.275 [2024-09-27 22:33:19.955478] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.215 22:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.215 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.215 "name": "raid_bdev1", 00:16:25.215 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:25.215 "strip_size_kb": 0, 00:16:25.215 "state": "online", 00:16:25.215 "raid_level": "raid1", 00:16:25.215 "superblock": true, 00:16:25.215 "num_base_bdevs": 2, 00:16:25.215 "num_base_bdevs_discovered": 2, 00:16:25.215 "num_base_bdevs_operational": 2, 00:16:25.215 "process": { 00:16:25.215 "type": "rebuild", 00:16:25.215 "target": "spare", 00:16:25.215 "progress": { 00:16:25.215 "blocks": 20480, 00:16:25.215 "percent": 32 00:16:25.215 } 00:16:25.215 }, 00:16:25.215 "base_bdevs_list": [ 00:16:25.215 { 00:16:25.215 "name": "spare", 00:16:25.215 "uuid": "1f202087-3943-5e5e-a53f-21bef866c06c", 00:16:25.215 "is_configured": true, 00:16:25.215 "data_offset": 2048, 00:16:25.215 "data_size": 63488 00:16:25.215 }, 00:16:25.215 { 00:16:25.215 "name": "BaseBdev2", 00:16:25.215 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:25.215 "is_configured": true, 00:16:25.215 "data_offset": 2048, 00:16:25.215 "data_size": 63488 00:16:25.215 } 00:16:25.215 ] 00:16:25.215 }' 00:16:25.215 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.215 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:25.215 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.473 [2024-09-27 22:33:21.111359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.473 [2024-09-27 22:33:21.161491] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:25.473 [2024-09-27 22:33:21.161867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.473 [2024-09-27 22:33:21.161893] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.473 [2024-09-27 22:33:21.161909] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.473 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.474 "name": "raid_bdev1", 00:16:25.474 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:25.474 "strip_size_kb": 0, 00:16:25.474 "state": "online", 00:16:25.474 "raid_level": "raid1", 00:16:25.474 "superblock": true, 00:16:25.474 "num_base_bdevs": 2, 00:16:25.474 "num_base_bdevs_discovered": 1, 00:16:25.474 "num_base_bdevs_operational": 1, 00:16:25.474 "base_bdevs_list": [ 00:16:25.474 { 00:16:25.474 "name": null, 00:16:25.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.474 "is_configured": false, 00:16:25.474 "data_offset": 0, 00:16:25.474 "data_size": 63488 00:16:25.474 }, 00:16:25.474 { 00:16:25.474 "name": "BaseBdev2", 00:16:25.474 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:25.474 "is_configured": true, 00:16:25.474 "data_offset": 2048, 00:16:25.474 "data_size": 63488 00:16:25.474 } 00:16:25.474 ] 00:16:25.474 }' 
00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.474 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.040 "name": "raid_bdev1", 00:16:26.040 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:26.040 "strip_size_kb": 0, 00:16:26.040 "state": "online", 00:16:26.040 "raid_level": "raid1", 00:16:26.040 "superblock": true, 00:16:26.040 "num_base_bdevs": 2, 00:16:26.040 "num_base_bdevs_discovered": 1, 00:16:26.040 "num_base_bdevs_operational": 1, 00:16:26.040 "base_bdevs_list": [ 00:16:26.040 { 00:16:26.040 "name": null, 00:16:26.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.040 "is_configured": false, 00:16:26.040 "data_offset": 0, 
00:16:26.040 "data_size": 63488 00:16:26.040 }, 00:16:26.040 { 00:16:26.040 "name": "BaseBdev2", 00:16:26.040 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:26.040 "is_configured": true, 00:16:26.040 "data_offset": 2048, 00:16:26.040 "data_size": 63488 00:16:26.040 } 00:16:26.040 ] 00:16:26.040 }' 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.040 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.040 [2024-09-27 22:33:21.836163] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:26.040 [2024-09-27 22:33:21.836412] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.040 [2024-09-27 22:33:21.836454] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:26.040 [2024-09-27 22:33:21.836472] vbdev_passthru.c: 
777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.040 [2024-09-27 22:33:21.836949] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.040 [2024-09-27 22:33:21.836992] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:26.040 [2024-09-27 22:33:21.837090] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:26.040 [2024-09-27 22:33:21.837110] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.040 [2024-09-27 22:33:21.837120] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.041 [2024-09-27 22:33:21.837143] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:26.041 BaseBdev1 00:16:26.041 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.041 22:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.234 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.234 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.234 "name": "raid_bdev1", 00:16:27.234 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:27.234 "strip_size_kb": 0, 00:16:27.234 "state": "online", 00:16:27.234 "raid_level": "raid1", 00:16:27.234 "superblock": true, 00:16:27.234 "num_base_bdevs": 2, 00:16:27.234 "num_base_bdevs_discovered": 1, 00:16:27.234 "num_base_bdevs_operational": 1, 00:16:27.234 "base_bdevs_list": [ 00:16:27.234 { 00:16:27.234 "name": null, 00:16:27.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.234 "is_configured": false, 00:16:27.234 "data_offset": 0, 00:16:27.234 "data_size": 63488 00:16:27.234 }, 00:16:27.234 { 00:16:27.234 "name": "BaseBdev2", 00:16:27.234 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:27.234 "is_configured": true, 00:16:27.234 "data_offset": 2048, 00:16:27.234 "data_size": 63488 00:16:27.234 } 00:16:27.234 ] 00:16:27.234 }' 00:16:27.234 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.234 22:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.493 "name": "raid_bdev1", 00:16:27.493 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:27.493 "strip_size_kb": 0, 00:16:27.493 "state": "online", 00:16:27.493 "raid_level": "raid1", 00:16:27.493 "superblock": true, 00:16:27.493 "num_base_bdevs": 2, 00:16:27.493 "num_base_bdevs_discovered": 1, 00:16:27.493 "num_base_bdevs_operational": 1, 00:16:27.493 "base_bdevs_list": [ 00:16:27.493 { 00:16:27.493 "name": null, 00:16:27.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.493 "is_configured": false, 00:16:27.493 "data_offset": 0, 00:16:27.493 "data_size": 63488 00:16:27.493 }, 00:16:27.493 { 00:16:27.493 "name": "BaseBdev2", 00:16:27.493 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:27.493 "is_configured": true, 
00:16:27.493 "data_offset": 2048, 00:16:27.493 "data_size": 63488 00:16:27.493 } 00:16:27.493 ] 00:16:27.493 }' 00:16:27.493 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.752 [2024-09-27 22:33:23.435896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.752 [2024-09-27 22:33:23.436091] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:27.752 [2024-09-27 22:33:23.436108] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:27.752 request: 00:16:27.752 { 00:16:27.752 "base_bdev": "BaseBdev1", 00:16:27.752 "raid_bdev": "raid_bdev1", 00:16:27.752 "method": "bdev_raid_add_base_bdev", 00:16:27.752 "req_id": 1 00:16:27.752 } 00:16:27.752 Got JSON-RPC error response 00:16:27.752 response: 00:16:27.752 { 00:16:27.752 "code": -22, 00:16:27.752 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:27.752 } 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.752 22:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.689 "name": "raid_bdev1", 00:16:28.689 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:28.689 "strip_size_kb": 0, 00:16:28.689 "state": "online", 00:16:28.689 "raid_level": "raid1", 00:16:28.689 "superblock": true, 00:16:28.689 "num_base_bdevs": 2, 00:16:28.689 "num_base_bdevs_discovered": 1, 00:16:28.689 "num_base_bdevs_operational": 1, 00:16:28.689 "base_bdevs_list": [ 00:16:28.689 { 00:16:28.689 "name": null, 00:16:28.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.689 "is_configured": false, 00:16:28.689 "data_offset": 0, 00:16:28.689 "data_size": 63488 00:16:28.689 }, 00:16:28.689 { 00:16:28.689 "name": "BaseBdev2", 00:16:28.689 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:28.689 "is_configured": true, 00:16:28.689 "data_offset": 2048, 00:16:28.689 "data_size": 63488 00:16:28.689 } 00:16:28.689 ] 00:16:28.689 }' 
00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.689 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.282 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.282 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.282 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.282 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.282 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.283 "name": "raid_bdev1", 00:16:29.283 "uuid": "3832885b-781c-440b-a452-ad1ae4b8b86e", 00:16:29.283 "strip_size_kb": 0, 00:16:29.283 "state": "online", 00:16:29.283 "raid_level": "raid1", 00:16:29.283 "superblock": true, 00:16:29.283 "num_base_bdevs": 2, 00:16:29.283 "num_base_bdevs_discovered": 1, 00:16:29.283 "num_base_bdevs_operational": 1, 00:16:29.283 "base_bdevs_list": [ 00:16:29.283 { 00:16:29.283 "name": null, 00:16:29.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.283 "is_configured": false, 00:16:29.283 "data_offset": 0, 
00:16:29.283 "data_size": 63488 00:16:29.283 }, 00:16:29.283 { 00:16:29.283 "name": "BaseBdev2", 00:16:29.283 "uuid": "ef02ee34-771d-5c21-b47d-451b7e646176", 00:16:29.283 "is_configured": true, 00:16:29.283 "data_offset": 2048, 00:16:29.283 "data_size": 63488 00:16:29.283 } 00:16:29.283 ] 00:16:29.283 }' 00:16:29.283 22:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77752 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77752 ']' 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77752 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77752 00:16:29.283 killing process with pid 77752 00:16:29.283 Received shutdown signal, test time was about 18.372256 seconds 00:16:29.283 00:16:29.283 Latency(us) 00:16:29.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.283 =================================================================================================================== 00:16:29.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.283 22:33:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77752' 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77752 00:16:29.283 [2024-09-27 22:33:25.123287] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.283 [2024-09-27 22:33:25.123434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.283 [2024-09-27 22:33:25.123499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.283 [2024-09-27 22:33:25.123511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:29.283 22:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77752 00:16:29.565 [2024-09-27 22:33:25.374989] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:32.095 00:16:32.095 real 0m22.957s 00:16:32.095 user 0m29.386s 00:16:32.095 sys 0m2.667s 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.095 ************************************ 00:16:32.095 END TEST raid_rebuild_test_sb_io 00:16:32.095 ************************************ 00:16:32.095 22:33:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:32.095 22:33:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:32.095 22:33:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:32.095 22:33:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:16:32.095 22:33:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.095 ************************************ 00:16:32.095 START TEST raid_rebuild_test 00:16:32.095 ************************************ 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78471 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78471 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 78471 ']' 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:16:32.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.095 22:33:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.095 [2024-09-27 22:33:27.767814] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:16:32.095 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:32.095 Zero copy mechanism will not be used. 00:16:32.095 [2024-09-27 22:33:27.768198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78471 ] 00:16:32.095 [2024-09-27 22:33:27.941588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.353 [2024-09-27 22:33:28.196029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.612 [2024-09-27 22:33:28.482752] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.612 [2024-09-27 22:33:28.482802] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.178 22:33:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.178 22:33:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:33.178 22:33:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.178 22:33:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:33.178 22:33:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.178 22:33:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.178 BaseBdev1_malloc 00:16:33.178 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.178 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:33.178 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.178 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.178 [2024-09-27 22:33:29.048665] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:33.178 [2024-09-27 22:33:29.049015] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.178 [2024-09-27 22:33:29.049066] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:33.178 [2024-09-27 22:33:29.049091] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.179 [2024-09-27 22:33:29.052187] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.179 [2024-09-27 22:33:29.052249] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:33.179 BaseBdev1 00:16:33.179 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.179 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.179 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:33.179 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.179 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.441 BaseBdev2_malloc 00:16:33.441 22:33:29 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.441 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:33.441 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.441 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.441 [2024-09-27 22:33:29.113121] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:33.441 [2024-09-27 22:33:29.113423] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.441 [2024-09-27 22:33:29.113476] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:33.441 [2024-09-27 22:33:29.113494] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.442 [2024-09-27 22:33:29.116233] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.442 [2024-09-27 22:33:29.116282] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:33.442 BaseBdev2 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 BaseBdev3_malloc 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:33.442 22:33:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 [2024-09-27 22:33:29.178069] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:33.442 [2024-09-27 22:33:29.178155] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.442 [2024-09-27 22:33:29.178185] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:33.442 [2024-09-27 22:33:29.178201] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.442 [2024-09-27 22:33:29.180998] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.442 [2024-09-27 22:33:29.181055] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:33.442 BaseBdev3 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 BaseBdev4_malloc 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 [2024-09-27 22:33:29.242581] 
vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:33.442 [2024-09-27 22:33:29.242674] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.442 [2024-09-27 22:33:29.242701] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:33.442 [2024-09-27 22:33:29.242717] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.442 [2024-09-27 22:33:29.245501] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.442 [2024-09-27 22:33:29.245561] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:33.442 BaseBdev4 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 spare_malloc 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 spare_delay 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:33.442 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.700 [2024-09-27 22:33:29.319782] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:33.700 [2024-09-27 22:33:29.319878] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.700 [2024-09-27 22:33:29.319908] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:33.700 [2024-09-27 22:33:29.319924] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.700 [2024-09-27 22:33:29.322709] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.700 [2024-09-27 22:33:29.322770] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:33.700 spare 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.700 [2024-09-27 22:33:29.331856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.700 [2024-09-27 22:33:29.334411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.700 [2024-09-27 22:33:29.334499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.700 [2024-09-27 22:33:29.334560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:33.700 [2024-09-27 22:33:29.334664] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:33.700 [2024-09-27 
22:33:29.334678] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:33.700 [2024-09-27 22:33:29.335035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:33.700 [2024-09-27 22:33:29.335231] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:33.700 [2024-09-27 22:33:29.335244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:33.700 [2024-09-27 22:33:29.335434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:16:33.700 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.701 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.701 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.701 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.701 "name": "raid_bdev1", 00:16:33.701 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:33.701 "strip_size_kb": 0, 00:16:33.701 "state": "online", 00:16:33.701 "raid_level": "raid1", 00:16:33.701 "superblock": false, 00:16:33.701 "num_base_bdevs": 4, 00:16:33.701 "num_base_bdevs_discovered": 4, 00:16:33.701 "num_base_bdevs_operational": 4, 00:16:33.701 "base_bdevs_list": [ 00:16:33.701 { 00:16:33.701 "name": "BaseBdev1", 00:16:33.701 "uuid": "81b04338-6294-552d-b8ea-d9ff22d200a7", 00:16:33.701 "is_configured": true, 00:16:33.701 "data_offset": 0, 00:16:33.701 "data_size": 65536 00:16:33.701 }, 00:16:33.701 { 00:16:33.701 "name": "BaseBdev2", 00:16:33.701 "uuid": "960e3878-f3a3-53de-a6ac-23429d9f3c70", 00:16:33.701 "is_configured": true, 00:16:33.701 "data_offset": 0, 00:16:33.701 "data_size": 65536 00:16:33.701 }, 00:16:33.701 { 00:16:33.701 "name": "BaseBdev3", 00:16:33.701 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:33.701 "is_configured": true, 00:16:33.701 "data_offset": 0, 00:16:33.701 "data_size": 65536 00:16:33.701 }, 00:16:33.701 { 00:16:33.701 "name": "BaseBdev4", 00:16:33.701 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:33.701 "is_configured": true, 00:16:33.701 "data_offset": 0, 00:16:33.701 "data_size": 65536 00:16:33.701 } 00:16:33.701 ] 00:16:33.701 }' 00:16:33.701 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.701 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:33.959 [2024-09-27 22:33:29.775540] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:33.959 22:33:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:34.217 22:33:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:34.217 22:33:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:34.475 [2024-09-27 22:33:30.110896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:34.475 /dev/nbd0 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.476 1+0 records in 00:16:34.476 1+0 records out 00:16:34.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541242 s, 7.6 MB/s 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:34.476 22:33:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:42.597 65536+0 records in 00:16:42.597 65536+0 records out 00:16:42.597 33554432 bytes (34 MB, 32 MiB) copied, 7.44622 s, 4.5 MB/s 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.597 [2024-09-27 22:33:37.875501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 [2024-09-27 22:33:37.891612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.597 "name": "raid_bdev1", 00:16:42.597 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:42.597 "strip_size_kb": 0, 00:16:42.597 "state": "online", 00:16:42.597 "raid_level": "raid1", 00:16:42.597 "superblock": false, 00:16:42.597 "num_base_bdevs": 4, 00:16:42.597 "num_base_bdevs_discovered": 3, 00:16:42.597 "num_base_bdevs_operational": 3, 00:16:42.597 "base_bdevs_list": [ 00:16:42.597 { 00:16:42.597 "name": null, 00:16:42.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.597 "is_configured": false, 00:16:42.597 "data_offset": 0, 00:16:42.597 "data_size": 
65536 00:16:42.597 }, 00:16:42.597 { 00:16:42.597 "name": "BaseBdev2", 00:16:42.597 "uuid": "960e3878-f3a3-53de-a6ac-23429d9f3c70", 00:16:42.597 "is_configured": true, 00:16:42.597 "data_offset": 0, 00:16:42.597 "data_size": 65536 00:16:42.597 }, 00:16:42.597 { 00:16:42.597 "name": "BaseBdev3", 00:16:42.597 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:42.597 "is_configured": true, 00:16:42.597 "data_offset": 0, 00:16:42.597 "data_size": 65536 00:16:42.597 }, 00:16:42.597 { 00:16:42.597 "name": "BaseBdev4", 00:16:42.597 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:42.597 "is_configured": true, 00:16:42.597 "data_offset": 0, 00:16:42.597 "data_size": 65536 00:16:42.597 } 00:16:42.597 ] 00:16:42.597 }' 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.597 22:33:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 22:33:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.597 22:33:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.597 22:33:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 [2024-09-27 22:33:38.342967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.597 [2024-09-27 22:33:38.362134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:42.597 22:33:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.597 22:33:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:42.597 [2024-09-27 22:33:38.364597] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.532 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.532 22:33:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.532 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.532 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.533 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.533 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.533 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.533 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.533 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.533 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.791 "name": "raid_bdev1", 00:16:43.791 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:43.791 "strip_size_kb": 0, 00:16:43.791 "state": "online", 00:16:43.791 "raid_level": "raid1", 00:16:43.791 "superblock": false, 00:16:43.791 "num_base_bdevs": 4, 00:16:43.791 "num_base_bdevs_discovered": 4, 00:16:43.791 "num_base_bdevs_operational": 4, 00:16:43.791 "process": { 00:16:43.791 "type": "rebuild", 00:16:43.791 "target": "spare", 00:16:43.791 "progress": { 00:16:43.791 "blocks": 20480, 00:16:43.791 "percent": 31 00:16:43.791 } 00:16:43.791 }, 00:16:43.791 "base_bdevs_list": [ 00:16:43.791 { 00:16:43.791 "name": "spare", 00:16:43.791 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:43.791 "is_configured": true, 00:16:43.791 "data_offset": 0, 00:16:43.791 "data_size": 65536 00:16:43.791 }, 00:16:43.791 { 00:16:43.791 "name": "BaseBdev2", 00:16:43.791 "uuid": "960e3878-f3a3-53de-a6ac-23429d9f3c70", 00:16:43.791 "is_configured": true, 00:16:43.791 "data_offset": 0, 
00:16:43.791 "data_size": 65536 00:16:43.791 }, 00:16:43.791 { 00:16:43.791 "name": "BaseBdev3", 00:16:43.791 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:43.791 "is_configured": true, 00:16:43.791 "data_offset": 0, 00:16:43.791 "data_size": 65536 00:16:43.791 }, 00:16:43.791 { 00:16:43.791 "name": "BaseBdev4", 00:16:43.791 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:43.791 "is_configured": true, 00:16:43.791 "data_offset": 0, 00:16:43.791 "data_size": 65536 00:16:43.791 } 00:16:43.791 ] 00:16:43.791 }' 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.791 [2024-09-27 22:33:39.515946] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.791 [2024-09-27 22:33:39.570619] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.791 [2024-09-27 22:33:39.571023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.791 [2024-09-27 22:33:39.571154] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.791 [2024-09-27 22:33:39.571273] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.791 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.791 "name": "raid_bdev1", 00:16:43.791 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:43.791 "strip_size_kb": 0, 00:16:43.791 "state": "online", 00:16:43.791 "raid_level": "raid1", 00:16:43.791 "superblock": false, 00:16:43.791 
"num_base_bdevs": 4, 00:16:43.791 "num_base_bdevs_discovered": 3, 00:16:43.791 "num_base_bdevs_operational": 3, 00:16:43.791 "base_bdevs_list": [ 00:16:43.791 { 00:16:43.791 "name": null, 00:16:43.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.791 "is_configured": false, 00:16:43.791 "data_offset": 0, 00:16:43.791 "data_size": 65536 00:16:43.791 }, 00:16:43.791 { 00:16:43.791 "name": "BaseBdev2", 00:16:43.791 "uuid": "960e3878-f3a3-53de-a6ac-23429d9f3c70", 00:16:43.791 "is_configured": true, 00:16:43.791 "data_offset": 0, 00:16:43.791 "data_size": 65536 00:16:43.791 }, 00:16:43.791 { 00:16:43.791 "name": "BaseBdev3", 00:16:43.792 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:43.792 "is_configured": true, 00:16:43.792 "data_offset": 0, 00:16:43.792 "data_size": 65536 00:16:43.792 }, 00:16:43.792 { 00:16:43.792 "name": "BaseBdev4", 00:16:43.792 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:43.792 "is_configured": true, 00:16:43.792 "data_offset": 0, 00:16:43.792 "data_size": 65536 00:16:43.792 } 00:16:43.792 ] 00:16:43.792 }' 00:16:43.792 22:33:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.792 22:33:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.359 "name": "raid_bdev1", 00:16:44.359 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:44.359 "strip_size_kb": 0, 00:16:44.359 "state": "online", 00:16:44.359 "raid_level": "raid1", 00:16:44.359 "superblock": false, 00:16:44.359 "num_base_bdevs": 4, 00:16:44.359 "num_base_bdevs_discovered": 3, 00:16:44.359 "num_base_bdevs_operational": 3, 00:16:44.359 "base_bdevs_list": [ 00:16:44.359 { 00:16:44.359 "name": null, 00:16:44.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.359 "is_configured": false, 00:16:44.359 "data_offset": 0, 00:16:44.359 "data_size": 65536 00:16:44.359 }, 00:16:44.359 { 00:16:44.359 "name": "BaseBdev2", 00:16:44.359 "uuid": "960e3878-f3a3-53de-a6ac-23429d9f3c70", 00:16:44.359 "is_configured": true, 00:16:44.359 "data_offset": 0, 00:16:44.359 "data_size": 65536 00:16:44.359 }, 00:16:44.359 { 00:16:44.359 "name": "BaseBdev3", 00:16:44.359 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:44.359 "is_configured": true, 00:16:44.359 "data_offset": 0, 00:16:44.359 "data_size": 65536 00:16:44.359 }, 00:16:44.359 { 00:16:44.359 "name": "BaseBdev4", 00:16:44.359 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:44.359 "is_configured": true, 00:16:44.359 "data_offset": 0, 00:16:44.359 "data_size": 65536 00:16:44.359 } 00:16:44.359 ] 00:16:44.359 }' 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.359 22:33:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.359 [2024-09-27 22:33:40.191965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.359 [2024-09-27 22:33:40.210195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.359 22:33:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:44.359 [2024-09-27 22:33:40.212795] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.736 
22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.736 "name": "raid_bdev1", 00:16:45.736 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:45.736 "strip_size_kb": 0, 00:16:45.736 "state": "online", 00:16:45.736 "raid_level": "raid1", 00:16:45.736 "superblock": false, 00:16:45.736 "num_base_bdevs": 4, 00:16:45.736 "num_base_bdevs_discovered": 4, 00:16:45.736 "num_base_bdevs_operational": 4, 00:16:45.736 "process": { 00:16:45.736 "type": "rebuild", 00:16:45.736 "target": "spare", 00:16:45.736 "progress": { 00:16:45.736 "blocks": 20480, 00:16:45.736 "percent": 31 00:16:45.736 } 00:16:45.736 }, 00:16:45.736 "base_bdevs_list": [ 00:16:45.736 { 00:16:45.736 "name": "spare", 00:16:45.736 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:45.736 "is_configured": true, 00:16:45.736 "data_offset": 0, 00:16:45.736 "data_size": 65536 00:16:45.736 }, 00:16:45.736 { 00:16:45.736 "name": "BaseBdev2", 00:16:45.736 "uuid": "960e3878-f3a3-53de-a6ac-23429d9f3c70", 00:16:45.736 "is_configured": true, 00:16:45.736 "data_offset": 0, 00:16:45.736 "data_size": 65536 00:16:45.736 }, 00:16:45.736 { 00:16:45.736 "name": "BaseBdev3", 00:16:45.736 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:45.736 "is_configured": true, 00:16:45.736 "data_offset": 0, 00:16:45.736 "data_size": 65536 00:16:45.736 }, 00:16:45.736 { 00:16:45.736 "name": "BaseBdev4", 00:16:45.736 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:45.736 "is_configured": true, 00:16:45.736 "data_offset": 0, 00:16:45.736 "data_size": 65536 00:16:45.736 } 00:16:45.736 ] 00:16:45.736 }' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 [2024-09-27 22:33:41.375949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.736 [2024-09-27 22:33:41.418634] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.736 
22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.736 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.736 "name": "raid_bdev1", 00:16:45.736 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:45.736 "strip_size_kb": 0, 00:16:45.736 "state": "online", 00:16:45.736 "raid_level": "raid1", 00:16:45.736 "superblock": false, 00:16:45.736 "num_base_bdevs": 4, 00:16:45.736 "num_base_bdevs_discovered": 3, 00:16:45.736 "num_base_bdevs_operational": 3, 00:16:45.736 "process": { 00:16:45.736 "type": "rebuild", 00:16:45.736 "target": "spare", 00:16:45.736 "progress": { 00:16:45.736 "blocks": 24576, 00:16:45.736 "percent": 37 00:16:45.736 } 00:16:45.736 }, 00:16:45.736 "base_bdevs_list": [ 00:16:45.736 { 00:16:45.736 "name": "spare", 00:16:45.736 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:45.736 "is_configured": true, 00:16:45.736 "data_offset": 0, 00:16:45.736 "data_size": 65536 00:16:45.737 }, 00:16:45.737 { 00:16:45.737 "name": null, 00:16:45.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.737 "is_configured": false, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 }, 00:16:45.737 { 00:16:45.737 "name": "BaseBdev3", 00:16:45.737 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 }, 00:16:45.737 { 
00:16:45.737 "name": "BaseBdev4", 00:16:45.737 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 } 00:16:45.737 ] 00:16:45.737 }' 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=525 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.737 22:33:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.737 22:33:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.737 "name": "raid_bdev1", 00:16:45.737 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:45.737 "strip_size_kb": 0, 00:16:45.737 "state": "online", 00:16:45.737 "raid_level": "raid1", 00:16:45.737 "superblock": false, 00:16:45.737 "num_base_bdevs": 4, 00:16:45.737 "num_base_bdevs_discovered": 3, 00:16:45.737 "num_base_bdevs_operational": 3, 00:16:45.737 "process": { 00:16:45.737 "type": "rebuild", 00:16:45.737 "target": "spare", 00:16:45.737 "progress": { 00:16:45.737 "blocks": 26624, 00:16:45.737 "percent": 40 00:16:45.737 } 00:16:45.737 }, 00:16:45.737 "base_bdevs_list": [ 00:16:45.737 { 00:16:45.737 "name": "spare", 00:16:45.737 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 }, 00:16:45.737 { 00:16:45.737 "name": null, 00:16:45.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.737 "is_configured": false, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 }, 00:16:45.737 { 00:16:45.737 "name": "BaseBdev3", 00:16:45.737 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 }, 00:16:45.737 { 00:16:45.737 "name": "BaseBdev4", 00:16:45.737 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 0, 00:16:45.737 "data_size": 65536 00:16:45.737 } 00:16:45.737 ] 00:16:45.737 }' 00:16:45.996 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.996 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.996 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.996 22:33:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.996 22:33:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.955 "name": "raid_bdev1", 00:16:46.955 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:46.955 "strip_size_kb": 0, 00:16:46.955 "state": "online", 00:16:46.955 "raid_level": "raid1", 00:16:46.955 "superblock": false, 00:16:46.955 "num_base_bdevs": 4, 00:16:46.955 "num_base_bdevs_discovered": 3, 00:16:46.955 "num_base_bdevs_operational": 3, 00:16:46.955 "process": { 00:16:46.955 "type": "rebuild", 00:16:46.955 "target": "spare", 00:16:46.955 "progress": { 00:16:46.955 "blocks": 49152, 00:16:46.955 "percent": 75 00:16:46.955 } 00:16:46.955 }, 00:16:46.955 
"base_bdevs_list": [ 00:16:46.955 { 00:16:46.955 "name": "spare", 00:16:46.955 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:46.955 "is_configured": true, 00:16:46.955 "data_offset": 0, 00:16:46.955 "data_size": 65536 00:16:46.955 }, 00:16:46.955 { 00:16:46.955 "name": null, 00:16:46.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.955 "is_configured": false, 00:16:46.955 "data_offset": 0, 00:16:46.955 "data_size": 65536 00:16:46.955 }, 00:16:46.955 { 00:16:46.955 "name": "BaseBdev3", 00:16:46.955 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:46.955 "is_configured": true, 00:16:46.955 "data_offset": 0, 00:16:46.955 "data_size": 65536 00:16:46.955 }, 00:16:46.955 { 00:16:46.955 "name": "BaseBdev4", 00:16:46.955 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:46.955 "is_configured": true, 00:16:46.955 "data_offset": 0, 00:16:46.955 "data_size": 65536 00:16:46.955 } 00:16:46.955 ] 00:16:46.955 }' 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.955 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.213 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.213 22:33:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.779 [2024-09-27 22:33:43.429222] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:47.779 [2024-09-27 22:33:43.429324] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:47.779 [2024-09-27 22:33:43.429398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.037 22:33:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.037 "name": "raid_bdev1", 00:16:48.037 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:48.037 "strip_size_kb": 0, 00:16:48.037 "state": "online", 00:16:48.037 "raid_level": "raid1", 00:16:48.037 "superblock": false, 00:16:48.037 "num_base_bdevs": 4, 00:16:48.037 "num_base_bdevs_discovered": 3, 00:16:48.037 "num_base_bdevs_operational": 3, 00:16:48.037 "base_bdevs_list": [ 00:16:48.037 { 00:16:48.037 "name": "spare", 00:16:48.037 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:48.037 "is_configured": true, 00:16:48.037 "data_offset": 0, 00:16:48.037 "data_size": 65536 00:16:48.037 }, 00:16:48.037 { 00:16:48.037 "name": null, 00:16:48.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.037 "is_configured": false, 00:16:48.037 "data_offset": 0, 00:16:48.037 "data_size": 65536 00:16:48.037 }, 
00:16:48.037 { 00:16:48.037 "name": "BaseBdev3", 00:16:48.037 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:48.037 "is_configured": true, 00:16:48.037 "data_offset": 0, 00:16:48.037 "data_size": 65536 00:16:48.037 }, 00:16:48.037 { 00:16:48.037 "name": "BaseBdev4", 00:16:48.037 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:48.037 "is_configured": true, 00:16:48.037 "data_offset": 0, 00:16:48.037 "data_size": 65536 00:16:48.037 } 00:16:48.037 ] 00:16:48.037 }' 00:16:48.037 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.294 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:48.294 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.295 22:33:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.295 
22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.295 "name": "raid_bdev1", 00:16:48.295 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:48.295 "strip_size_kb": 0, 00:16:48.295 "state": "online", 00:16:48.295 "raid_level": "raid1", 00:16:48.295 "superblock": false, 00:16:48.295 "num_base_bdevs": 4, 00:16:48.295 "num_base_bdevs_discovered": 3, 00:16:48.295 "num_base_bdevs_operational": 3, 00:16:48.295 "base_bdevs_list": [ 00:16:48.295 { 00:16:48.295 "name": "spare", 00:16:48.295 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:48.295 "is_configured": true, 00:16:48.295 "data_offset": 0, 00:16:48.295 "data_size": 65536 00:16:48.295 }, 00:16:48.295 { 00:16:48.295 "name": null, 00:16:48.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.295 "is_configured": false, 00:16:48.295 "data_offset": 0, 00:16:48.295 "data_size": 65536 00:16:48.295 }, 00:16:48.295 { 00:16:48.295 "name": "BaseBdev3", 00:16:48.295 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:48.295 "is_configured": true, 00:16:48.295 "data_offset": 0, 00:16:48.295 "data_size": 65536 00:16:48.295 }, 00:16:48.295 { 00:16:48.295 "name": "BaseBdev4", 00:16:48.295 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:48.295 "is_configured": true, 00:16:48.295 "data_offset": 0, 00:16:48.295 "data_size": 65536 00:16:48.295 } 00:16:48.295 ] 00:16:48.295 }' 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.295 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.553 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.553 "name": "raid_bdev1", 00:16:48.553 "uuid": "183b2312-8242-4cb7-a410-e5defadcf75f", 00:16:48.553 "strip_size_kb": 0, 00:16:48.553 "state": "online", 00:16:48.553 "raid_level": "raid1", 00:16:48.553 "superblock": false, 00:16:48.553 "num_base_bdevs": 4, 00:16:48.553 "num_base_bdevs_discovered": 3, 00:16:48.553 
"num_base_bdevs_operational": 3, 00:16:48.553 "base_bdevs_list": [ 00:16:48.553 { 00:16:48.553 "name": "spare", 00:16:48.553 "uuid": "41ae1d28-15bd-5433-a5b7-69abf9fba9a3", 00:16:48.553 "is_configured": true, 00:16:48.553 "data_offset": 0, 00:16:48.553 "data_size": 65536 00:16:48.553 }, 00:16:48.553 { 00:16:48.553 "name": null, 00:16:48.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.553 "is_configured": false, 00:16:48.553 "data_offset": 0, 00:16:48.553 "data_size": 65536 00:16:48.553 }, 00:16:48.553 { 00:16:48.553 "name": "BaseBdev3", 00:16:48.553 "uuid": "0dc3da8b-e962-5d30-b2a6-1f267ad476cc", 00:16:48.553 "is_configured": true, 00:16:48.553 "data_offset": 0, 00:16:48.553 "data_size": 65536 00:16:48.553 }, 00:16:48.553 { 00:16:48.553 "name": "BaseBdev4", 00:16:48.553 "uuid": "a541cdfb-bb0e-5a75-a5b8-331b45f97f7f", 00:16:48.553 "is_configured": true, 00:16:48.553 "data_offset": 0, 00:16:48.553 "data_size": 65536 00:16:48.553 } 00:16:48.553 ] 00:16:48.553 }' 00:16:48.553 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.553 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.810 [2024-09-27 22:33:44.604514] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.810 [2024-09-27 22:33:44.604558] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.810 [2024-09-27 22:33:44.604648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.810 [2024-09-27 22:33:44.604743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:16:48.810 [2024-09-27 22:33:44.604756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:48.810 22:33:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:48.811 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:49.068 /dev/nbd0 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.068 1+0 records in 00:16:49.068 1+0 records out 00:16:49.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032919 s, 12.4 MB/s 00:16:49.068 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.327 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:49.327 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.327 22:33:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:49.327 22:33:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:49.327 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.327 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.327 22:33:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:49.327 /dev/nbd1 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.585 1+0 records in 00:16:49.585 1+0 records out 00:16:49.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420339 s, 9.7 MB/s 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.585 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.845 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78471 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 78471 ']' 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 78471 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.124 22:33:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78471 00:16:50.384 22:33:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.384 killing process with pid 78471 00:16:50.384 Received shutdown signal, test time was about 60.000000 seconds 00:16:50.384 00:16:50.384 Latency(us) 00:16:50.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.384 =================================================================================================================== 00:16:50.384 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.384 22:33:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.384 22:33:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78471' 00:16:50.384 22:33:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 78471 00:16:50.384 [2024-09-27 22:33:46.008914] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.384 22:33:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 78471 00:16:50.952 [2024-09-27 22:33:46.561585] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.852 22:33:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:52.852 00:16:52.852 real 0m21.061s 00:16:52.852 user 0m22.634s 00:16:52.852 sys 0m4.510s 00:16:52.852 22:33:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.852 ************************************ 00:16:52.852 END TEST raid_rebuild_test 00:16:52.852 ************************************ 00:16:52.852 22:33:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 22:33:48 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:16:53.110 22:33:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:53.110 22:33:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.110 22:33:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 ************************************ 00:16:53.110 START TEST raid_rebuild_test_sb 00:16:53.110 ************************************ 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78951 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78951 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78951 ']' 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.110 22:33:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.110 [2024-09-27 22:33:48.902901] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:16:53.110 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:53.110 Zero copy mechanism will not be used. 
00:16:53.110 [2024-09-27 22:33:48.903225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78951 ] 00:16:53.368 [2024-09-27 22:33:49.077372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.625 [2024-09-27 22:33:49.331774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.883 [2024-09-27 22:33:49.589804] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.883 [2024-09-27 22:33:49.590105] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.448 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 BaseBdev1_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 [2024-09-27 22:33:50.156581] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:54.449 [2024-09-27 22:33:50.156676] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.449 [2024-09-27 22:33:50.156710] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:54.449 [2024-09-27 22:33:50.156734] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.449 [2024-09-27 22:33:50.159571] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.449 [2024-09-27 22:33:50.159633] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:54.449 BaseBdev1 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 BaseBdev2_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 [2024-09-27 22:33:50.223504] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:54.449 [2024-09-27 22:33:50.223598] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.449 [2024-09-27 22:33:50.223632] vbdev_passthru.c: 
762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:54.449 [2024-09-27 22:33:50.223648] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.449 [2024-09-27 22:33:50.226424] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.449 [2024-09-27 22:33:50.226485] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:54.449 BaseBdev2 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 BaseBdev3_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.449 [2024-09-27 22:33:50.288869] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:54.449 [2024-09-27 22:33:50.288955] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.449 [2024-09-27 22:33:50.289003] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:54.449 [2024-09-27 22:33:50.289020] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:54.449 [2024-09-27 22:33:50.291727] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.449 [2024-09-27 22:33:50.292005] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:54.449 BaseBdev3 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.449 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 BaseBdev4_malloc 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 [2024-09-27 22:33:50.355305] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:54.708 [2024-09-27 22:33:50.355489] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.708 [2024-09-27 22:33:50.355570] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:54.708 [2024-09-27 22:33:50.355625] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.708 [2024-09-27 22:33:50.363270] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.708 [2024-09-27 22:33:50.363440] vbdev_passthru.c: 
791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:54.708 BaseBdev4 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 spare_malloc 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 spare_delay 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 [2024-09-27 22:33:50.441350] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.708 [2024-09-27 22:33:50.441442] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.708 [2024-09-27 22:33:50.441472] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:54.708 [2024-09-27 22:33:50.441488] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:54.708 [2024-09-27 22:33:50.444222] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.708 [2024-09-27 22:33:50.444274] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.708 spare 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 [2024-09-27 22:33:50.453394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.708 [2024-09-27 22:33:50.455872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.708 [2024-09-27 22:33:50.455957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.708 [2024-09-27 22:33:50.456040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.708 [2024-09-27 22:33:50.456264] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:54.708 [2024-09-27 22:33:50.456281] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.708 [2024-09-27 22:33:50.456610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:54.708 [2024-09-27 22:33:50.456809] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:54.708 [2024-09-27 22:33:50.456821] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:54.708 [2024-09-27 22:33:50.457188] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.708 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.708 "name": "raid_bdev1", 00:16:54.708 "uuid": 
"23893a90-7415-4089-864b-a796e12b26e8", 00:16:54.708 "strip_size_kb": 0, 00:16:54.708 "state": "online", 00:16:54.708 "raid_level": "raid1", 00:16:54.708 "superblock": true, 00:16:54.708 "num_base_bdevs": 4, 00:16:54.708 "num_base_bdevs_discovered": 4, 00:16:54.708 "num_base_bdevs_operational": 4, 00:16:54.708 "base_bdevs_list": [ 00:16:54.709 { 00:16:54.709 "name": "BaseBdev1", 00:16:54.709 "uuid": "b9c52b13-de94-565d-868c-b365e1172b62", 00:16:54.709 "is_configured": true, 00:16:54.709 "data_offset": 2048, 00:16:54.709 "data_size": 63488 00:16:54.709 }, 00:16:54.709 { 00:16:54.709 "name": "BaseBdev2", 00:16:54.709 "uuid": "5c7402ad-ef4d-5095-a17c-ac39824b6366", 00:16:54.709 "is_configured": true, 00:16:54.709 "data_offset": 2048, 00:16:54.709 "data_size": 63488 00:16:54.709 }, 00:16:54.709 { 00:16:54.709 "name": "BaseBdev3", 00:16:54.709 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:16:54.709 "is_configured": true, 00:16:54.709 "data_offset": 2048, 00:16:54.709 "data_size": 63488 00:16:54.709 }, 00:16:54.709 { 00:16:54.709 "name": "BaseBdev4", 00:16:54.709 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:16:54.709 "is_configured": true, 00:16:54.709 "data_offset": 2048, 00:16:54.709 "data_size": 63488 00:16:54.709 } 00:16:54.709 ] 00:16:54.709 }' 00:16:54.709 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.709 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:55.275 [2024-09-27 22:33:50.909496] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:55.275 22:33:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:55.275 22:33:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:55.275 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:55.275 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:55.533 [2024-09-27 22:33:51.221088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:55.533 /dev/nbd0 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:55.533 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.534 1+0 records in 00:16:55.534 1+0 records out 00:16:55.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541258 s, 7.6 MB/s 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:55.534 22:33:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:03.644 63488+0 records in 00:17:03.644 63488+0 records out 00:17:03.644 32505856 bytes (33 MB, 31 MiB) copied, 7.22854 s, 4.5 MB/s 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:17:03.644 [2024-09-27 22:33:58.765841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.644 [2024-09-27 22:33:58.801924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.644 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.645 "name": "raid_bdev1", 00:17:03.645 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:03.645 "strip_size_kb": 0, 00:17:03.645 "state": "online", 00:17:03.645 "raid_level": "raid1", 00:17:03.645 "superblock": true, 00:17:03.645 "num_base_bdevs": 4, 00:17:03.645 "num_base_bdevs_discovered": 3, 00:17:03.645 "num_base_bdevs_operational": 3, 00:17:03.645 "base_bdevs_list": [ 00:17:03.645 { 00:17:03.645 "name": null, 00:17:03.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.645 "is_configured": false, 00:17:03.645 "data_offset": 0, 00:17:03.645 "data_size": 63488 00:17:03.645 }, 00:17:03.645 { 00:17:03.645 "name": "BaseBdev2", 00:17:03.645 "uuid": "5c7402ad-ef4d-5095-a17c-ac39824b6366", 00:17:03.645 "is_configured": true, 00:17:03.645 
"data_offset": 2048, 00:17:03.645 "data_size": 63488 00:17:03.645 }, 00:17:03.645 { 00:17:03.645 "name": "BaseBdev3", 00:17:03.645 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:03.645 "is_configured": true, 00:17:03.645 "data_offset": 2048, 00:17:03.645 "data_size": 63488 00:17:03.645 }, 00:17:03.645 { 00:17:03.645 "name": "BaseBdev4", 00:17:03.645 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:03.645 "is_configured": true, 00:17:03.645 "data_offset": 2048, 00:17:03.645 "data_size": 63488 00:17:03.645 } 00:17:03.645 ] 00:17:03.645 }' 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.645 22:33:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.645 22:33:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:03.645 22:33:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.645 22:33:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.645 [2024-09-27 22:33:59.273245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.645 [2024-09-27 22:33:59.293253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:03.645 22:33:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.645 22:33:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:03.645 [2024-09-27 22:33:59.295884] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.580 "name": "raid_bdev1", 00:17:04.580 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:04.580 "strip_size_kb": 0, 00:17:04.580 "state": "online", 00:17:04.580 "raid_level": "raid1", 00:17:04.580 "superblock": true, 00:17:04.580 "num_base_bdevs": 4, 00:17:04.580 "num_base_bdevs_discovered": 4, 00:17:04.580 "num_base_bdevs_operational": 4, 00:17:04.580 "process": { 00:17:04.580 "type": "rebuild", 00:17:04.580 "target": "spare", 00:17:04.580 "progress": { 00:17:04.580 "blocks": 20480, 00:17:04.580 "percent": 32 00:17:04.580 } 00:17:04.580 }, 00:17:04.580 "base_bdevs_list": [ 00:17:04.580 { 00:17:04.580 "name": "spare", 00:17:04.580 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:04.580 "is_configured": true, 00:17:04.580 "data_offset": 2048, 00:17:04.580 "data_size": 63488 00:17:04.580 }, 00:17:04.580 { 00:17:04.580 "name": "BaseBdev2", 00:17:04.580 "uuid": "5c7402ad-ef4d-5095-a17c-ac39824b6366", 00:17:04.580 "is_configured": true, 00:17:04.580 "data_offset": 2048, 00:17:04.580 "data_size": 63488 00:17:04.580 }, 00:17:04.580 { 00:17:04.580 "name": "BaseBdev3", 00:17:04.580 "uuid": 
"da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:04.580 "is_configured": true, 00:17:04.580 "data_offset": 2048, 00:17:04.580 "data_size": 63488 00:17:04.580 }, 00:17:04.580 { 00:17:04.580 "name": "BaseBdev4", 00:17:04.580 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:04.580 "is_configured": true, 00:17:04.580 "data_offset": 2048, 00:17:04.580 "data_size": 63488 00:17:04.580 } 00:17:04.580 ] 00:17:04.580 }' 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.580 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.580 [2024-09-27 22:34:00.444020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.839 [2024-09-27 22:34:00.502280] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.839 [2024-09-27 22:34:00.502379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.839 [2024-09-27 22:34:00.502401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.839 [2024-09-27 22:34:00.502414] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.839 "name": "raid_bdev1", 00:17:04.839 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:04.839 "strip_size_kb": 0, 00:17:04.839 "state": "online", 00:17:04.839 "raid_level": "raid1", 00:17:04.839 "superblock": true, 00:17:04.839 "num_base_bdevs": 4, 00:17:04.839 
"num_base_bdevs_discovered": 3, 00:17:04.839 "num_base_bdevs_operational": 3, 00:17:04.839 "base_bdevs_list": [ 00:17:04.839 { 00:17:04.839 "name": null, 00:17:04.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.839 "is_configured": false, 00:17:04.839 "data_offset": 0, 00:17:04.839 "data_size": 63488 00:17:04.839 }, 00:17:04.839 { 00:17:04.839 "name": "BaseBdev2", 00:17:04.839 "uuid": "5c7402ad-ef4d-5095-a17c-ac39824b6366", 00:17:04.839 "is_configured": true, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 }, 00:17:04.839 { 00:17:04.839 "name": "BaseBdev3", 00:17:04.839 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:04.839 "is_configured": true, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 }, 00:17:04.839 { 00:17:04.839 "name": "BaseBdev4", 00:17:04.839 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:04.839 "is_configured": true, 00:17:04.839 "data_offset": 2048, 00:17:04.839 "data_size": 63488 00:17:04.839 } 00:17:04.839 ] 00:17:04.839 }' 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.839 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.406 22:34:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.406 "name": "raid_bdev1", 00:17:05.406 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:05.406 "strip_size_kb": 0, 00:17:05.406 "state": "online", 00:17:05.406 "raid_level": "raid1", 00:17:05.406 "superblock": true, 00:17:05.406 "num_base_bdevs": 4, 00:17:05.406 "num_base_bdevs_discovered": 3, 00:17:05.406 "num_base_bdevs_operational": 3, 00:17:05.406 "base_bdevs_list": [ 00:17:05.406 { 00:17:05.406 "name": null, 00:17:05.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.406 "is_configured": false, 00:17:05.406 "data_offset": 0, 00:17:05.406 "data_size": 63488 00:17:05.406 }, 00:17:05.406 { 00:17:05.406 "name": "BaseBdev2", 00:17:05.406 "uuid": "5c7402ad-ef4d-5095-a17c-ac39824b6366", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 }, 00:17:05.406 { 00:17:05.406 "name": "BaseBdev3", 00:17:05.406 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 }, 00:17:05.406 { 00:17:05.406 "name": "BaseBdev4", 00:17:05.406 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 } 00:17:05.406 ] 00:17:05.406 }' 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.406 [2024-09-27 22:34:01.145397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.406 [2024-09-27 22:34:01.165044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.406 22:34:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:05.406 [2024-09-27 22:34:01.167499] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.342 22:34:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.342 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.600 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.600 "name": "raid_bdev1", 00:17:06.600 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:06.600 "strip_size_kb": 0, 00:17:06.600 "state": "online", 00:17:06.600 "raid_level": "raid1", 00:17:06.600 "superblock": true, 00:17:06.600 "num_base_bdevs": 4, 00:17:06.600 "num_base_bdevs_discovered": 4, 00:17:06.600 "num_base_bdevs_operational": 4, 00:17:06.600 "process": { 00:17:06.600 "type": "rebuild", 00:17:06.600 "target": "spare", 00:17:06.600 "progress": { 00:17:06.600 "blocks": 20480, 00:17:06.600 "percent": 32 00:17:06.600 } 00:17:06.600 }, 00:17:06.600 "base_bdevs_list": [ 00:17:06.600 { 00:17:06.600 "name": "spare", 00:17:06.600 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:06.600 "is_configured": true, 00:17:06.600 "data_offset": 2048, 00:17:06.600 "data_size": 63488 00:17:06.600 }, 00:17:06.600 { 00:17:06.601 "name": "BaseBdev2", 00:17:06.601 "uuid": "5c7402ad-ef4d-5095-a17c-ac39824b6366", 00:17:06.601 "is_configured": true, 00:17:06.601 "data_offset": 2048, 00:17:06.601 "data_size": 63488 00:17:06.601 }, 00:17:06.601 { 00:17:06.601 "name": "BaseBdev3", 00:17:06.601 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:06.601 "is_configured": true, 00:17:06.601 "data_offset": 2048, 00:17:06.601 "data_size": 63488 00:17:06.601 }, 00:17:06.601 { 00:17:06.601 "name": "BaseBdev4", 00:17:06.601 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:06.601 "is_configured": true, 00:17:06.601 "data_offset": 2048, 00:17:06.601 "data_size": 63488 00:17:06.601 } 00:17:06.601 ] 00:17:06.601 }' 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:06.601 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.601 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.601 [2024-09-27 22:34:02.312020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:06.601 [2024-09-27 22:34:02.473527] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.858 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.858 "name": "raid_bdev1", 00:17:06.858 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:06.858 "strip_size_kb": 0, 00:17:06.858 "state": "online", 00:17:06.858 "raid_level": "raid1", 00:17:06.858 "superblock": true, 00:17:06.858 "num_base_bdevs": 4, 00:17:06.858 "num_base_bdevs_discovered": 3, 00:17:06.858 "num_base_bdevs_operational": 3, 00:17:06.858 "process": { 00:17:06.858 "type": "rebuild", 00:17:06.858 "target": "spare", 00:17:06.858 "progress": { 00:17:06.858 "blocks": 24576, 00:17:06.858 "percent": 38 00:17:06.858 } 00:17:06.858 }, 00:17:06.858 "base_bdevs_list": [ 00:17:06.858 { 00:17:06.858 "name": "spare", 00:17:06.858 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:06.858 "is_configured": true, 00:17:06.858 "data_offset": 2048, 00:17:06.858 "data_size": 63488 00:17:06.858 }, 00:17:06.859 { 00:17:06.859 "name": null, 
00:17:06.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.859 "is_configured": false, 00:17:06.859 "data_offset": 0, 00:17:06.859 "data_size": 63488 00:17:06.859 }, 00:17:06.859 { 00:17:06.859 "name": "BaseBdev3", 00:17:06.859 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:06.859 "is_configured": true, 00:17:06.859 "data_offset": 2048, 00:17:06.859 "data_size": 63488 00:17:06.859 }, 00:17:06.859 { 00:17:06.859 "name": "BaseBdev4", 00:17:06.859 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:06.859 "is_configured": true, 00:17:06.859 "data_offset": 2048, 00:17:06.859 "data_size": 63488 00:17:06.859 } 00:17:06.859 ] 00:17:06.859 }' 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=546 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.859 "name": "raid_bdev1", 00:17:06.859 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:06.859 "strip_size_kb": 0, 00:17:06.859 "state": "online", 00:17:06.859 "raid_level": "raid1", 00:17:06.859 "superblock": true, 00:17:06.859 "num_base_bdevs": 4, 00:17:06.859 "num_base_bdevs_discovered": 3, 00:17:06.859 "num_base_bdevs_operational": 3, 00:17:06.859 "process": { 00:17:06.859 "type": "rebuild", 00:17:06.859 "target": "spare", 00:17:06.859 "progress": { 00:17:06.859 "blocks": 26624, 00:17:06.859 "percent": 41 00:17:06.859 } 00:17:06.859 }, 00:17:06.859 "base_bdevs_list": [ 00:17:06.859 { 00:17:06.859 "name": "spare", 00:17:06.859 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:06.859 "is_configured": true, 00:17:06.859 "data_offset": 2048, 00:17:06.859 "data_size": 63488 00:17:06.859 }, 00:17:06.859 { 00:17:06.859 "name": null, 00:17:06.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.859 "is_configured": false, 00:17:06.859 "data_offset": 0, 00:17:06.859 "data_size": 63488 00:17:06.859 }, 00:17:06.859 { 00:17:06.859 "name": "BaseBdev3", 00:17:06.859 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:06.859 "is_configured": true, 00:17:06.859 "data_offset": 2048, 00:17:06.859 "data_size": 63488 00:17:06.859 }, 00:17:06.859 { 00:17:06.859 "name": "BaseBdev4", 00:17:06.859 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:06.859 "is_configured": true, 00:17:06.859 "data_offset": 
2048, 00:17:06.859 "data_size": 63488 00:17:06.859 } 00:17:06.859 ] 00:17:06.859 }' 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.859 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.116 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.116 22:34:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.048 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.048 "name": "raid_bdev1", 00:17:08.048 
"uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:08.048 "strip_size_kb": 0, 00:17:08.048 "state": "online", 00:17:08.048 "raid_level": "raid1", 00:17:08.048 "superblock": true, 00:17:08.048 "num_base_bdevs": 4, 00:17:08.048 "num_base_bdevs_discovered": 3, 00:17:08.048 "num_base_bdevs_operational": 3, 00:17:08.048 "process": { 00:17:08.048 "type": "rebuild", 00:17:08.048 "target": "spare", 00:17:08.048 "progress": { 00:17:08.049 "blocks": 51200, 00:17:08.049 "percent": 80 00:17:08.049 } 00:17:08.049 }, 00:17:08.049 "base_bdevs_list": [ 00:17:08.049 { 00:17:08.049 "name": "spare", 00:17:08.049 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:08.049 "is_configured": true, 00:17:08.049 "data_offset": 2048, 00:17:08.049 "data_size": 63488 00:17:08.049 }, 00:17:08.049 { 00:17:08.049 "name": null, 00:17:08.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.049 "is_configured": false, 00:17:08.049 "data_offset": 0, 00:17:08.049 "data_size": 63488 00:17:08.049 }, 00:17:08.049 { 00:17:08.049 "name": "BaseBdev3", 00:17:08.049 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:08.049 "is_configured": true, 00:17:08.049 "data_offset": 2048, 00:17:08.049 "data_size": 63488 00:17:08.049 }, 00:17:08.049 { 00:17:08.049 "name": "BaseBdev4", 00:17:08.049 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:08.049 "is_configured": true, 00:17:08.049 "data_offset": 2048, 00:17:08.049 "data_size": 63488 00:17:08.049 } 00:17:08.049 ] 00:17:08.049 }' 00:17:08.049 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.049 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.049 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.349 22:34:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.349 22:34:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.607 [2024-09-27 22:34:04.383193] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:08.607 [2024-09-27 22:34:04.383288] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:08.607 [2024-09-27 22:34:04.383441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.172 "name": "raid_bdev1", 00:17:09.172 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:09.172 "strip_size_kb": 0, 00:17:09.172 "state": "online", 00:17:09.172 "raid_level": "raid1", 00:17:09.172 "superblock": true, 00:17:09.172 "num_base_bdevs": 
4, 00:17:09.172 "num_base_bdevs_discovered": 3, 00:17:09.172 "num_base_bdevs_operational": 3, 00:17:09.172 "base_bdevs_list": [ 00:17:09.172 { 00:17:09.172 "name": "spare", 00:17:09.172 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:09.172 "is_configured": true, 00:17:09.172 "data_offset": 2048, 00:17:09.172 "data_size": 63488 00:17:09.172 }, 00:17:09.172 { 00:17:09.172 "name": null, 00:17:09.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.172 "is_configured": false, 00:17:09.172 "data_offset": 0, 00:17:09.172 "data_size": 63488 00:17:09.172 }, 00:17:09.172 { 00:17:09.172 "name": "BaseBdev3", 00:17:09.172 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:09.172 "is_configured": true, 00:17:09.172 "data_offset": 2048, 00:17:09.172 "data_size": 63488 00:17:09.172 }, 00:17:09.172 { 00:17:09.172 "name": "BaseBdev4", 00:17:09.172 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:09.172 "is_configured": true, 00:17:09.172 "data_offset": 2048, 00:17:09.172 "data_size": 63488 00:17:09.172 } 00:17:09.172 ] 00:17:09.172 }' 00:17:09.172 22:34:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.172 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:09.172 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.430 22:34:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.430 "name": "raid_bdev1", 00:17:09.430 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:09.430 "strip_size_kb": 0, 00:17:09.430 "state": "online", 00:17:09.430 "raid_level": "raid1", 00:17:09.430 "superblock": true, 00:17:09.430 "num_base_bdevs": 4, 00:17:09.430 "num_base_bdevs_discovered": 3, 00:17:09.430 "num_base_bdevs_operational": 3, 00:17:09.430 "base_bdevs_list": [ 00:17:09.430 { 00:17:09.430 "name": "spare", 00:17:09.430 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:09.430 "is_configured": true, 00:17:09.430 "data_offset": 2048, 00:17:09.430 "data_size": 63488 00:17:09.430 }, 00:17:09.430 { 00:17:09.430 "name": null, 00:17:09.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.430 "is_configured": false, 00:17:09.430 "data_offset": 0, 00:17:09.430 "data_size": 63488 00:17:09.430 }, 00:17:09.430 { 00:17:09.430 "name": "BaseBdev3", 00:17:09.430 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:09.430 "is_configured": true, 00:17:09.430 "data_offset": 2048, 00:17:09.430 "data_size": 63488 00:17:09.430 }, 00:17:09.430 { 00:17:09.430 "name": "BaseBdev4", 00:17:09.430 "uuid": 
"fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:09.430 "is_configured": true, 00:17:09.430 "data_offset": 2048, 00:17:09.430 "data_size": 63488 00:17:09.430 } 00:17:09.430 ] 00:17:09.430 }' 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.430 22:34:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.430 "name": "raid_bdev1", 00:17:09.430 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:09.430 "strip_size_kb": 0, 00:17:09.430 "state": "online", 00:17:09.430 "raid_level": "raid1", 00:17:09.430 "superblock": true, 00:17:09.430 "num_base_bdevs": 4, 00:17:09.430 "num_base_bdevs_discovered": 3, 00:17:09.430 "num_base_bdevs_operational": 3, 00:17:09.430 "base_bdevs_list": [ 00:17:09.430 { 00:17:09.430 "name": "spare", 00:17:09.430 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:09.430 "is_configured": true, 00:17:09.430 "data_offset": 2048, 00:17:09.430 "data_size": 63488 00:17:09.430 }, 00:17:09.430 { 00:17:09.430 "name": null, 00:17:09.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.430 "is_configured": false, 00:17:09.430 "data_offset": 0, 00:17:09.430 "data_size": 63488 00:17:09.430 }, 00:17:09.430 { 00:17:09.430 "name": "BaseBdev3", 00:17:09.430 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:09.430 "is_configured": true, 00:17:09.430 "data_offset": 2048, 00:17:09.430 "data_size": 63488 00:17:09.430 }, 00:17:09.430 { 00:17:09.430 "name": "BaseBdev4", 00:17:09.430 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:09.430 "is_configured": true, 00:17:09.430 "data_offset": 2048, 00:17:09.430 "data_size": 63488 00:17:09.430 } 00:17:09.430 ] 00:17:09.430 }' 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.430 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.995 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:17:09.995 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.995 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.996 [2024-09-27 22:34:05.683987] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.996 [2024-09-27 22:34:05.684033] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.996 [2024-09-27 22:34:05.684128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.996 [2024-09-27 22:34:05.684226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.996 [2024-09-27 22:34:05.684240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:09.996 
22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:09.996 22:34:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:10.559 /dev/nbd0 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:10.559 22:34:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.559 1+0 records in 00:17:10.559 1+0 records out 00:17:10.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596922 s, 6.9 MB/s 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.559 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:10.560 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:10.817 /dev/nbd1 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.817 1+0 records in 00:17:10.817 1+0 records out 00:17:10.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361064 s, 11.3 MB/s 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.817 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.074 22:34:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:11.332 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.616 [2024-09-27 22:34:07.240515] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.616 [2024-09-27 22:34:07.240594] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.616 [2024-09-27 22:34:07.240633] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:11.616 [2024-09-27 22:34:07.240654] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.616 [2024-09-27 22:34:07.243570] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.616 [2024-09-27 22:34:07.243623] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:17:11.616 [2024-09-27 22:34:07.243745] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:11.616 [2024-09-27 22:34:07.243805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.616 [2024-09-27 22:34:07.244009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.616 [2024-09-27 22:34:07.244120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.616 spare 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.616 [2024-09-27 22:34:07.344068] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:11.616 [2024-09-27 22:34:07.344301] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:11.616 [2024-09-27 22:34:07.344723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:11.616 [2024-09-27 22:34:07.344951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:11.616 [2024-09-27 22:34:07.344979] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:11.616 [2024-09-27 22:34:07.345261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:11.616 22:34:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.616 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.616 "name": "raid_bdev1", 00:17:11.616 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:11.616 "strip_size_kb": 0, 00:17:11.616 "state": "online", 00:17:11.616 "raid_level": "raid1", 00:17:11.616 "superblock": true, 00:17:11.616 "num_base_bdevs": 4, 00:17:11.616 "num_base_bdevs_discovered": 3, 00:17:11.616 "num_base_bdevs_operational": 3, 00:17:11.616 "base_bdevs_list": [ 00:17:11.616 { 
00:17:11.616 "name": "spare", 00:17:11.616 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:11.616 "is_configured": true, 00:17:11.616 "data_offset": 2048, 00:17:11.616 "data_size": 63488 00:17:11.616 }, 00:17:11.616 { 00:17:11.616 "name": null, 00:17:11.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.616 "is_configured": false, 00:17:11.616 "data_offset": 2048, 00:17:11.616 "data_size": 63488 00:17:11.616 }, 00:17:11.616 { 00:17:11.616 "name": "BaseBdev3", 00:17:11.616 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:11.616 "is_configured": true, 00:17:11.616 "data_offset": 2048, 00:17:11.616 "data_size": 63488 00:17:11.616 }, 00:17:11.616 { 00:17:11.616 "name": "BaseBdev4", 00:17:11.616 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:11.616 "is_configured": true, 00:17:11.616 "data_offset": 2048, 00:17:11.616 "data_size": 63488 00:17:11.617 } 00:17:11.617 ] 00:17:11.617 }' 00:17:11.617 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.617 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.184 "name": "raid_bdev1", 00:17:12.184 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:12.184 "strip_size_kb": 0, 00:17:12.184 "state": "online", 00:17:12.184 "raid_level": "raid1", 00:17:12.184 "superblock": true, 00:17:12.184 "num_base_bdevs": 4, 00:17:12.184 "num_base_bdevs_discovered": 3, 00:17:12.184 "num_base_bdevs_operational": 3, 00:17:12.184 "base_bdevs_list": [ 00:17:12.184 { 00:17:12.184 "name": "spare", 00:17:12.184 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:12.184 "is_configured": true, 00:17:12.184 "data_offset": 2048, 00:17:12.184 "data_size": 63488 00:17:12.184 }, 00:17:12.184 { 00:17:12.184 "name": null, 00:17:12.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.184 "is_configured": false, 00:17:12.184 "data_offset": 2048, 00:17:12.184 "data_size": 63488 00:17:12.184 }, 00:17:12.184 { 00:17:12.184 "name": "BaseBdev3", 00:17:12.184 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:12.184 "is_configured": true, 00:17:12.184 "data_offset": 2048, 00:17:12.184 "data_size": 63488 00:17:12.184 }, 00:17:12.184 { 00:17:12.184 "name": "BaseBdev4", 00:17:12.184 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:12.184 "is_configured": true, 00:17:12.184 "data_offset": 2048, 00:17:12.184 "data_size": 63488 00:17:12.184 } 00:17:12.184 ] 00:17:12.184 }' 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.184 22:34:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.184 [2024-09-27 22:34:07.992621] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.184 22:34:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.184 22:34:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.184 22:34:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.184 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.184 22:34:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.184 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.185 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.185 22:34:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.185 "name": "raid_bdev1", 00:17:12.185 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:12.185 "strip_size_kb": 0, 00:17:12.185 "state": "online", 00:17:12.185 "raid_level": "raid1", 00:17:12.185 "superblock": true, 00:17:12.185 "num_base_bdevs": 4, 00:17:12.185 "num_base_bdevs_discovered": 2, 00:17:12.185 "num_base_bdevs_operational": 2, 00:17:12.185 "base_bdevs_list": [ 00:17:12.185 { 00:17:12.185 "name": null, 00:17:12.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.185 "is_configured": false, 00:17:12.185 "data_offset": 0, 00:17:12.185 "data_size": 63488 00:17:12.185 }, 00:17:12.185 { 00:17:12.185 "name": null, 00:17:12.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.185 "is_configured": false, 00:17:12.185 "data_offset": 2048, 00:17:12.185 "data_size": 63488 00:17:12.185 }, 00:17:12.185 { 00:17:12.185 "name": "BaseBdev3", 00:17:12.185 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:12.185 
"is_configured": true, 00:17:12.185 "data_offset": 2048, 00:17:12.185 "data_size": 63488 00:17:12.185 }, 00:17:12.185 { 00:17:12.185 "name": "BaseBdev4", 00:17:12.185 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:12.185 "is_configured": true, 00:17:12.185 "data_offset": 2048, 00:17:12.185 "data_size": 63488 00:17:12.185 } 00:17:12.185 ] 00:17:12.185 }' 00:17:12.185 22:34:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.185 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.752 22:34:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.752 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.752 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.752 [2024-09-27 22:34:08.448053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.752 [2024-09-27 22:34:08.448274] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:12.753 [2024-09-27 22:34:08.448291] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:12.753 [2024-09-27 22:34:08.448345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.753 [2024-09-27 22:34:08.466436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:12.753 22:34:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.753 22:34:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:12.753 [2024-09-27 22:34:08.468931] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.687 "name": "raid_bdev1", 00:17:13.687 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:13.687 "strip_size_kb": 0, 00:17:13.687 "state": "online", 00:17:13.687 "raid_level": "raid1", 
00:17:13.687 "superblock": true, 00:17:13.687 "num_base_bdevs": 4, 00:17:13.687 "num_base_bdevs_discovered": 3, 00:17:13.687 "num_base_bdevs_operational": 3, 00:17:13.687 "process": { 00:17:13.687 "type": "rebuild", 00:17:13.687 "target": "spare", 00:17:13.687 "progress": { 00:17:13.687 "blocks": 20480, 00:17:13.687 "percent": 32 00:17:13.687 } 00:17:13.687 }, 00:17:13.687 "base_bdevs_list": [ 00:17:13.687 { 00:17:13.687 "name": "spare", 00:17:13.687 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:13.687 "is_configured": true, 00:17:13.687 "data_offset": 2048, 00:17:13.687 "data_size": 63488 00:17:13.687 }, 00:17:13.687 { 00:17:13.687 "name": null, 00:17:13.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.687 "is_configured": false, 00:17:13.687 "data_offset": 2048, 00:17:13.687 "data_size": 63488 00:17:13.687 }, 00:17:13.687 { 00:17:13.687 "name": "BaseBdev3", 00:17:13.687 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:13.687 "is_configured": true, 00:17:13.687 "data_offset": 2048, 00:17:13.687 "data_size": 63488 00:17:13.687 }, 00:17:13.687 { 00:17:13.687 "name": "BaseBdev4", 00:17:13.687 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:13.687 "is_configured": true, 00:17:13.687 "data_offset": 2048, 00:17:13.687 "data_size": 63488 00:17:13.687 } 00:17:13.687 ] 00:17:13.687 }' 00:17:13.687 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.945 [2024-09-27 22:34:09.620687] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.945 [2024-09-27 22:34:09.674877] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.945 [2024-09-27 22:34:09.675249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.945 [2024-09-27 22:34:09.675366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.945 [2024-09-27 22:34:09.675409] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.945 "name": "raid_bdev1", 00:17:13.945 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:13.945 "strip_size_kb": 0, 00:17:13.945 "state": "online", 00:17:13.945 "raid_level": "raid1", 00:17:13.945 "superblock": true, 00:17:13.945 "num_base_bdevs": 4, 00:17:13.945 "num_base_bdevs_discovered": 2, 00:17:13.945 "num_base_bdevs_operational": 2, 00:17:13.945 "base_bdevs_list": [ 00:17:13.945 { 00:17:13.945 "name": null, 00:17:13.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.945 "is_configured": false, 00:17:13.945 "data_offset": 0, 00:17:13.945 "data_size": 63488 00:17:13.945 }, 00:17:13.945 { 00:17:13.945 "name": null, 00:17:13.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.945 "is_configured": false, 00:17:13.945 "data_offset": 2048, 00:17:13.945 "data_size": 63488 00:17:13.945 }, 00:17:13.945 { 00:17:13.945 "name": "BaseBdev3", 00:17:13.945 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:13.945 "is_configured": true, 00:17:13.945 "data_offset": 2048, 00:17:13.945 "data_size": 63488 00:17:13.945 }, 00:17:13.945 { 00:17:13.945 "name": "BaseBdev4", 00:17:13.945 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:13.945 "is_configured": true, 00:17:13.945 "data_offset": 2048, 00:17:13.945 "data_size": 63488 00:17:13.945 } 00:17:13.945 ] 00:17:13.945 }' 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:13.945 22:34:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.511 22:34:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:14.511 22:34:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.511 22:34:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.511 [2024-09-27 22:34:10.130943] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:14.511 [2024-09-27 22:34:10.131047] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.511 [2024-09-27 22:34:10.131086] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:14.511 [2024-09-27 22:34:10.131100] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.511 [2024-09-27 22:34:10.131634] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.511 [2024-09-27 22:34:10.131673] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:14.511 [2024-09-27 22:34:10.131784] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:14.511 [2024-09-27 22:34:10.131801] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:14.511 [2024-09-27 22:34:10.131816] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:14.511 [2024-09-27 22:34:10.131862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.511 [2024-09-27 22:34:10.149437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:14.511 spare 00:17:14.511 22:34:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.511 22:34:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:14.511 [2024-09-27 22:34:10.152109] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.451 "name": "raid_bdev1", 00:17:15.451 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:15.451 "strip_size_kb": 0, 00:17:15.451 "state": "online", 00:17:15.451 
"raid_level": "raid1", 00:17:15.451 "superblock": true, 00:17:15.451 "num_base_bdevs": 4, 00:17:15.451 "num_base_bdevs_discovered": 3, 00:17:15.451 "num_base_bdevs_operational": 3, 00:17:15.451 "process": { 00:17:15.451 "type": "rebuild", 00:17:15.451 "target": "spare", 00:17:15.451 "progress": { 00:17:15.451 "blocks": 20480, 00:17:15.451 "percent": 32 00:17:15.451 } 00:17:15.451 }, 00:17:15.451 "base_bdevs_list": [ 00:17:15.451 { 00:17:15.451 "name": "spare", 00:17:15.451 "uuid": "3540c135-afe7-59a5-9e51-86a3c812d42c", 00:17:15.451 "is_configured": true, 00:17:15.451 "data_offset": 2048, 00:17:15.451 "data_size": 63488 00:17:15.451 }, 00:17:15.451 { 00:17:15.451 "name": null, 00:17:15.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.451 "is_configured": false, 00:17:15.451 "data_offset": 2048, 00:17:15.451 "data_size": 63488 00:17:15.451 }, 00:17:15.451 { 00:17:15.451 "name": "BaseBdev3", 00:17:15.451 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:15.451 "is_configured": true, 00:17:15.451 "data_offset": 2048, 00:17:15.451 "data_size": 63488 00:17:15.451 }, 00:17:15.451 { 00:17:15.451 "name": "BaseBdev4", 00:17:15.451 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:15.451 "is_configured": true, 00:17:15.451 "data_offset": 2048, 00:17:15.451 "data_size": 63488 00:17:15.451 } 00:17:15.451 ] 00:17:15.451 }' 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.451 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.451 [2024-09-27 22:34:11.308478] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.710 [2024-09-27 22:34:11.358172] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.710 [2024-09-27 22:34:11.358263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.710 [2024-09-27 22:34:11.358285] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.710 [2024-09-27 22:34:11.358298] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.710 
22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.710 "name": "raid_bdev1", 00:17:15.710 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:15.710 "strip_size_kb": 0, 00:17:15.710 "state": "online", 00:17:15.710 "raid_level": "raid1", 00:17:15.710 "superblock": true, 00:17:15.710 "num_base_bdevs": 4, 00:17:15.710 "num_base_bdevs_discovered": 2, 00:17:15.710 "num_base_bdevs_operational": 2, 00:17:15.710 "base_bdevs_list": [ 00:17:15.710 { 00:17:15.710 "name": null, 00:17:15.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.710 "is_configured": false, 00:17:15.710 "data_offset": 0, 00:17:15.710 "data_size": 63488 00:17:15.710 }, 00:17:15.710 { 00:17:15.710 "name": null, 00:17:15.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.710 "is_configured": false, 00:17:15.710 "data_offset": 2048, 00:17:15.710 "data_size": 63488 00:17:15.710 }, 00:17:15.710 { 00:17:15.710 "name": "BaseBdev3", 00:17:15.710 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:15.710 "is_configured": true, 00:17:15.710 "data_offset": 2048, 00:17:15.710 "data_size": 63488 00:17:15.710 }, 00:17:15.710 { 00:17:15.710 "name": "BaseBdev4", 00:17:15.710 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:15.710 "is_configured": true, 00:17:15.710 "data_offset": 2048, 00:17:15.710 "data_size": 63488 00:17:15.710 } 00:17:15.710 ] 00:17:15.710 }' 00:17:15.710 22:34:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.710 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.276 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.277 "name": "raid_bdev1", 00:17:16.277 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:16.277 "strip_size_kb": 0, 00:17:16.277 "state": "online", 00:17:16.277 "raid_level": "raid1", 00:17:16.277 "superblock": true, 00:17:16.277 "num_base_bdevs": 4, 00:17:16.277 "num_base_bdevs_discovered": 2, 00:17:16.277 "num_base_bdevs_operational": 2, 00:17:16.277 "base_bdevs_list": [ 00:17:16.277 { 00:17:16.277 "name": null, 00:17:16.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.277 "is_configured": false, 00:17:16.277 "data_offset": 0, 00:17:16.277 "data_size": 63488 00:17:16.277 }, 00:17:16.277 
{ 00:17:16.277 "name": null, 00:17:16.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.277 "is_configured": false, 00:17:16.277 "data_offset": 2048, 00:17:16.277 "data_size": 63488 00:17:16.277 }, 00:17:16.277 { 00:17:16.277 "name": "BaseBdev3", 00:17:16.277 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:16.277 "is_configured": true, 00:17:16.277 "data_offset": 2048, 00:17:16.277 "data_size": 63488 00:17:16.277 }, 00:17:16.277 { 00:17:16.277 "name": "BaseBdev4", 00:17:16.277 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:16.277 "is_configured": true, 00:17:16.277 "data_offset": 2048, 00:17:16.277 "data_size": 63488 00:17:16.277 } 00:17:16.277 ] 00:17:16.277 }' 00:17:16.277 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.277 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.277 22:34:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.277 [2024-09-27 22:34:12.025520] vbdev_passthru.c: 
687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:16.277 [2024-09-27 22:34:12.025615] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.277 [2024-09-27 22:34:12.025643] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:16.277 [2024-09-27 22:34:12.025660] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.277 [2024-09-27 22:34:12.026231] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.277 [2024-09-27 22:34:12.026257] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:16.277 [2024-09-27 22:34:12.026370] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:16.277 [2024-09-27 22:34:12.026391] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:16.277 [2024-09-27 22:34:12.026402] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:16.277 [2024-09-27 22:34:12.026420] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:16.277 BaseBdev1 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.277 22:34:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.212 22:34:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.212 "name": "raid_bdev1", 00:17:17.212 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:17.212 "strip_size_kb": 0, 00:17:17.212 "state": "online", 00:17:17.212 "raid_level": "raid1", 00:17:17.212 "superblock": true, 00:17:17.212 "num_base_bdevs": 4, 00:17:17.212 "num_base_bdevs_discovered": 2, 00:17:17.212 "num_base_bdevs_operational": 2, 00:17:17.212 "base_bdevs_list": [ 00:17:17.212 { 00:17:17.212 "name": null, 00:17:17.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.212 "is_configured": false, 00:17:17.212 "data_offset": 0, 00:17:17.212 "data_size": 63488 00:17:17.212 }, 00:17:17.212 { 00:17:17.212 "name": null, 00:17:17.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.212 
"is_configured": false, 00:17:17.212 "data_offset": 2048, 00:17:17.212 "data_size": 63488 00:17:17.212 }, 00:17:17.212 { 00:17:17.212 "name": "BaseBdev3", 00:17:17.212 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:17.212 "is_configured": true, 00:17:17.212 "data_offset": 2048, 00:17:17.212 "data_size": 63488 00:17:17.212 }, 00:17:17.212 { 00:17:17.212 "name": "BaseBdev4", 00:17:17.212 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:17.212 "is_configured": true, 00:17:17.212 "data_offset": 2048, 00:17:17.212 "data_size": 63488 00:17:17.212 } 00:17:17.212 ] 00:17:17.212 }' 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.212 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:17.780 "name": "raid_bdev1", 00:17:17.780 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:17.780 "strip_size_kb": 0, 00:17:17.780 "state": "online", 00:17:17.780 "raid_level": "raid1", 00:17:17.780 "superblock": true, 00:17:17.780 "num_base_bdevs": 4, 00:17:17.780 "num_base_bdevs_discovered": 2, 00:17:17.780 "num_base_bdevs_operational": 2, 00:17:17.780 "base_bdevs_list": [ 00:17:17.780 { 00:17:17.780 "name": null, 00:17:17.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.780 "is_configured": false, 00:17:17.780 "data_offset": 0, 00:17:17.780 "data_size": 63488 00:17:17.780 }, 00:17:17.780 { 00:17:17.780 "name": null, 00:17:17.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.780 "is_configured": false, 00:17:17.780 "data_offset": 2048, 00:17:17.780 "data_size": 63488 00:17:17.780 }, 00:17:17.780 { 00:17:17.780 "name": "BaseBdev3", 00:17:17.780 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:17.780 "is_configured": true, 00:17:17.780 "data_offset": 2048, 00:17:17.780 "data_size": 63488 00:17:17.780 }, 00:17:17.780 { 00:17:17.780 "name": "BaseBdev4", 00:17:17.780 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:17.780 "is_configured": true, 00:17:17.780 "data_offset": 2048, 00:17:17.780 "data_size": 63488 00:17:17.780 } 00:17:17.780 ] 00:17:17.780 }' 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.780 [2024-09-27 22:34:13.628023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.780 [2024-09-27 22:34:13.629157] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:17.780 [2024-09-27 22:34:13.629184] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.780 request: 00:17:17.780 { 00:17:17.780 "base_bdev": "BaseBdev1", 00:17:17.780 "raid_bdev": "raid_bdev1", 00:17:17.780 "method": "bdev_raid_add_base_bdev", 00:17:17.780 "req_id": 1 00:17:17.780 } 00:17:17.780 Got JSON-RPC error response 00:17:17.780 response: 00:17:17.780 { 00:17:17.780 "code": -22, 00:17:17.780 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:17.780 } 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:17.780 22:34:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.157 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.158 "name": "raid_bdev1", 00:17:19.158 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:19.158 "strip_size_kb": 0, 00:17:19.158 "state": "online", 00:17:19.158 "raid_level": "raid1", 00:17:19.158 "superblock": true, 00:17:19.158 "num_base_bdevs": 4, 00:17:19.158 "num_base_bdevs_discovered": 2, 00:17:19.158 "num_base_bdevs_operational": 2, 00:17:19.158 "base_bdevs_list": [ 00:17:19.158 { 00:17:19.158 "name": null, 00:17:19.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.158 "is_configured": false, 00:17:19.158 "data_offset": 0, 00:17:19.158 "data_size": 63488 00:17:19.158 }, 00:17:19.158 { 00:17:19.158 "name": null, 00:17:19.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.158 "is_configured": false, 00:17:19.158 "data_offset": 2048, 00:17:19.158 "data_size": 63488 00:17:19.158 }, 00:17:19.158 { 00:17:19.158 "name": "BaseBdev3", 00:17:19.158 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:19.158 "is_configured": true, 00:17:19.158 "data_offset": 2048, 00:17:19.158 "data_size": 63488 00:17:19.158 }, 00:17:19.158 { 00:17:19.158 "name": "BaseBdev4", 00:17:19.158 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:19.158 "is_configured": true, 00:17:19.158 "data_offset": 2048, 00:17:19.158 "data_size": 63488 00:17:19.158 } 00:17:19.158 ] 00:17:19.158 }' 00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.158 22:34:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.416 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.416 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.416 22:34:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.416 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.416 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.416 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.416 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.417 "name": "raid_bdev1", 00:17:19.417 "uuid": "23893a90-7415-4089-864b-a796e12b26e8", 00:17:19.417 "strip_size_kb": 0, 00:17:19.417 "state": "online", 00:17:19.417 "raid_level": "raid1", 00:17:19.417 "superblock": true, 00:17:19.417 "num_base_bdevs": 4, 00:17:19.417 "num_base_bdevs_discovered": 2, 00:17:19.417 "num_base_bdevs_operational": 2, 00:17:19.417 "base_bdevs_list": [ 00:17:19.417 { 00:17:19.417 "name": null, 00:17:19.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.417 "is_configured": false, 00:17:19.417 "data_offset": 0, 00:17:19.417 "data_size": 63488 00:17:19.417 }, 00:17:19.417 { 00:17:19.417 "name": null, 00:17:19.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.417 "is_configured": false, 00:17:19.417 "data_offset": 2048, 00:17:19.417 "data_size": 63488 00:17:19.417 }, 00:17:19.417 { 00:17:19.417 "name": "BaseBdev3", 00:17:19.417 "uuid": "da427a50-4b14-5d6b-98b4-bae1029b833e", 00:17:19.417 "is_configured": true, 00:17:19.417 "data_offset": 2048, 00:17:19.417 "data_size": 63488 00:17:19.417 }, 
00:17:19.417 { 00:17:19.417 "name": "BaseBdev4", 00:17:19.417 "uuid": "fd0d9fac-df17-59c0-a206-20b65cad567b", 00:17:19.417 "is_configured": true, 00:17:19.417 "data_offset": 2048, 00:17:19.417 "data_size": 63488 00:17:19.417 } 00:17:19.417 ] 00:17:19.417 }' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78951 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78951 ']' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78951 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78951 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:19.417 killing process with pid 78951 00:17:19.417 Received shutdown signal, test time was about 60.000000 seconds 00:17:19.417 00:17:19.417 Latency(us) 00:17:19.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.417 =================================================================================================================== 00:17:19.417 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78951' 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78951 00:17:19.417 [2024-09-27 22:34:15.279929] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.417 [2024-09-27 22:34:15.280103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.417 22:34:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78951 00:17:19.417 [2024-09-27 22:34:15.280179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.417 [2024-09-27 22:34:15.280191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:19.984 [2024-09-27 22:34:15.829392] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.520 22:34:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:22.520 00:17:22.520 real 0m29.179s 00:17:22.520 user 0m33.984s 00:17:22.520 sys 0m5.403s 00:17:22.520 22:34:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.520 22:34:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.520 ************************************ 00:17:22.520 END TEST raid_rebuild_test_sb 00:17:22.520 ************************************ 00:17:22.520 22:34:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:22.520 22:34:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:22.520 22:34:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.520 22:34:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.520 ************************************ 00:17:22.520 START TEST raid_rebuild_test_io 
00:17:22.520 ************************************ 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79757 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79757 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 79757 ']' 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.520 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.520 22:34:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.520 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:22.520 Zero copy mechanism will not be used. 00:17:22.520 [2024-09-27 22:34:18.171377] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:17:22.520 [2024-09-27 22:34:18.171525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79757 ] 00:17:22.520 [2024-09-27 22:34:18.350523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.779 [2024-09-27 22:34:18.609140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.131 [2024-09-27 22:34:18.871351] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.131 [2024-09-27 22:34:18.871394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.700 BaseBdev1_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.700 [2024-09-27 22:34:19.437443] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.700 [2024-09-27 22:34:19.437737] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.700 [2024-09-27 22:34:19.437774] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:23.700 [2024-09-27 22:34:19.437794] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.700 [2024-09-27 22:34:19.440528] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.700 [2024-09-27 22:34:19.440581] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.700 BaseBdev1 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.700 BaseBdev2_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.700 [2024-09-27 22:34:19.502571] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:23.700 [2024-09-27 22:34:19.502658] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.700 [2024-09-27 22:34:19.502689] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:23.700 [2024-09-27 22:34:19.502704] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.700 [2024-09-27 22:34:19.505370] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.700 [2024-09-27 22:34:19.505571] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:23.700 BaseBdev2 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.700 BaseBdev3_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.700 [2024-09-27 22:34:19.568535] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:23.700 [2024-09-27 22:34:19.568611] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.700 [2024-09-27 22:34:19.568637] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:23.700 [2024-09-27 22:34:19.568653] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.700 [2024-09-27 22:34:19.571374] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.700 [2024-09-27 22:34:19.571551] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:23.700 BaseBdev3 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.700 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.960 BaseBdev4_malloc 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:17:23.960 [2024-09-27 22:34:19.633924] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:23.960 [2024-09-27 22:34:19.634170] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.960 [2024-09-27 22:34:19.634205] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:23.960 [2024-09-27 22:34:19.634220] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.960 [2024-09-27 22:34:19.636869] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.960 [2024-09-27 22:34:19.636921] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:23.960 BaseBdev4 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.960 spare_malloc 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.960 spare_delay 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.960 22:34:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.960 [2024-09-27 22:34:19.708289] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.960 [2024-09-27 22:34:19.708568] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.960 [2024-09-27 22:34:19.708607] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:23.960 [2024-09-27 22:34:19.708622] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.960 [2024-09-27 22:34:19.711351] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.960 [2024-09-27 22:34:19.711405] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.960 spare 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.960 [2024-09-27 22:34:19.720342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.960 [2024-09-27 22:34:19.722682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.960 [2024-09-27 22:34:19.722932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.960 [2024-09-27 22:34:19.723015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.960 [2024-09-27 22:34:19.723121] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:23.960 [2024-09-27 22:34:19.723135] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:23.960 [2024-09-27 22:34:19.723484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:23.960 [2024-09-27 22:34:19.723675] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:23.960 [2024-09-27 22:34:19.723688] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:23.960 [2024-09-27 22:34:19.723892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.960 "name": "raid_bdev1", 00:17:23.960 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:23.960 "strip_size_kb": 0, 00:17:23.960 "state": "online", 00:17:23.960 "raid_level": "raid1", 00:17:23.960 "superblock": false, 00:17:23.960 "num_base_bdevs": 4, 00:17:23.960 "num_base_bdevs_discovered": 4, 00:17:23.960 "num_base_bdevs_operational": 4, 00:17:23.960 "base_bdevs_list": [ 00:17:23.960 { 00:17:23.960 "name": "BaseBdev1", 00:17:23.960 "uuid": "b0e3fd46-e1e1-56de-a5dd-48fcf80e59c7", 00:17:23.960 "is_configured": true, 00:17:23.960 "data_offset": 0, 00:17:23.960 "data_size": 65536 00:17:23.960 }, 00:17:23.960 { 00:17:23.960 "name": "BaseBdev2", 00:17:23.960 "uuid": "e30b29e0-faee-5f25-a228-d544313eccaf", 00:17:23.960 "is_configured": true, 00:17:23.960 "data_offset": 0, 00:17:23.960 "data_size": 65536 00:17:23.960 }, 00:17:23.960 { 00:17:23.960 "name": "BaseBdev3", 00:17:23.960 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:23.960 "is_configured": true, 00:17:23.960 "data_offset": 0, 00:17:23.960 "data_size": 65536 00:17:23.960 }, 00:17:23.960 { 00:17:23.960 "name": "BaseBdev4", 00:17:23.960 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:23.960 "is_configured": true, 00:17:23.960 "data_offset": 0, 00:17:23.960 "data_size": 65536 00:17:23.960 } 00:17:23.960 ] 00:17:23.960 }' 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:23.960 22:34:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:24.530 [2024-09-27 22:34:20.156394] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 [2024-09-27 22:34:20.256039] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.530 22:34:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.530 "name": "raid_bdev1", 00:17:24.530 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:24.530 "strip_size_kb": 0, 00:17:24.530 "state": "online", 00:17:24.530 "raid_level": "raid1", 00:17:24.530 "superblock": false, 00:17:24.530 "num_base_bdevs": 4, 00:17:24.530 "num_base_bdevs_discovered": 3, 00:17:24.530 "num_base_bdevs_operational": 3, 00:17:24.530 "base_bdevs_list": [ 00:17:24.530 { 00:17:24.530 "name": null, 00:17:24.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.530 "is_configured": false, 00:17:24.530 "data_offset": 0, 00:17:24.530 "data_size": 65536 00:17:24.530 }, 00:17:24.530 { 00:17:24.530 "name": "BaseBdev2", 00:17:24.530 "uuid": "e30b29e0-faee-5f25-a228-d544313eccaf", 00:17:24.530 "is_configured": true, 00:17:24.530 "data_offset": 0, 00:17:24.530 "data_size": 65536 00:17:24.530 }, 00:17:24.530 { 00:17:24.530 "name": "BaseBdev3", 00:17:24.530 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:24.530 "is_configured": true, 00:17:24.530 "data_offset": 0, 00:17:24.530 "data_size": 65536 00:17:24.530 }, 00:17:24.530 { 00:17:24.530 "name": "BaseBdev4", 00:17:24.530 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:24.530 "is_configured": true, 00:17:24.530 "data_offset": 0, 00:17:24.530 "data_size": 65536 00:17:24.530 } 00:17:24.530 ] 00:17:24.530 }' 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.530 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.530 [2024-09-27 22:34:20.362184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:24.530 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:24.530 Zero copy mechanism will not be used. 00:17:24.530 Running I/O for 60 seconds... 
00:17:25.098 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.098 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.098 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.098 [2024-09-27 22:34:20.718286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.098 22:34:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.098 22:34:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:25.098 [2024-09-27 22:34:20.782257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:25.099 [2024-09-27 22:34:20.784803] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.099 [2024-09-27 22:34:20.895784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:25.099 [2024-09-27 22:34:20.896435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:25.357 [2024-09-27 22:34:21.116809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:25.357 [2024-09-27 22:34:21.117389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:25.616 162.00 IOPS, 486.00 MiB/s [2024-09-27 22:34:21.459368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:25.875 [2024-09-27 22:34:21.687351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:25.875 [2024-09-27 22:34:21.687993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:26.133 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.134 "name": "raid_bdev1", 00:17:26.134 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:26.134 "strip_size_kb": 0, 00:17:26.134 "state": "online", 00:17:26.134 "raid_level": "raid1", 00:17:26.134 "superblock": false, 00:17:26.134 "num_base_bdevs": 4, 00:17:26.134 "num_base_bdevs_discovered": 4, 00:17:26.134 "num_base_bdevs_operational": 4, 00:17:26.134 "process": { 00:17:26.134 "type": "rebuild", 00:17:26.134 "target": "spare", 00:17:26.134 "progress": { 00:17:26.134 "blocks": 10240, 00:17:26.134 "percent": 15 00:17:26.134 } 00:17:26.134 }, 00:17:26.134 "base_bdevs_list": [ 00:17:26.134 { 00:17:26.134 "name": "spare", 00:17:26.134 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:26.134 
"is_configured": true, 00:17:26.134 "data_offset": 0, 00:17:26.134 "data_size": 65536 00:17:26.134 }, 00:17:26.134 { 00:17:26.134 "name": "BaseBdev2", 00:17:26.134 "uuid": "e30b29e0-faee-5f25-a228-d544313eccaf", 00:17:26.134 "is_configured": true, 00:17:26.134 "data_offset": 0, 00:17:26.134 "data_size": 65536 00:17:26.134 }, 00:17:26.134 { 00:17:26.134 "name": "BaseBdev3", 00:17:26.134 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:26.134 "is_configured": true, 00:17:26.134 "data_offset": 0, 00:17:26.134 "data_size": 65536 00:17:26.134 }, 00:17:26.134 { 00:17:26.134 "name": "BaseBdev4", 00:17:26.134 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:26.134 "is_configured": true, 00:17:26.134 "data_offset": 0, 00:17:26.134 "data_size": 65536 00:17:26.134 } 00:17:26.134 ] 00:17:26.134 }' 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.134 22:34:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.134 [2024-09-27 22:34:21.912411] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.393 [2024-09-27 22:34:22.035565] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.393 [2024-09-27 22:34:22.047137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.393 [2024-09-27 
22:34:22.047235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.393 [2024-09-27 22:34:22.047251] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.393 [2024-09-27 22:34:22.083656] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.393 "name": "raid_bdev1", 00:17:26.393 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:26.393 "strip_size_kb": 0, 00:17:26.393 "state": "online", 00:17:26.393 "raid_level": "raid1", 00:17:26.393 "superblock": false, 00:17:26.393 "num_base_bdevs": 4, 00:17:26.393 "num_base_bdevs_discovered": 3, 00:17:26.393 "num_base_bdevs_operational": 3, 00:17:26.393 "base_bdevs_list": [ 00:17:26.393 { 00:17:26.393 "name": null, 00:17:26.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.393 "is_configured": false, 00:17:26.393 "data_offset": 0, 00:17:26.393 "data_size": 65536 00:17:26.393 }, 00:17:26.393 { 00:17:26.393 "name": "BaseBdev2", 00:17:26.393 "uuid": "e30b29e0-faee-5f25-a228-d544313eccaf", 00:17:26.393 "is_configured": true, 00:17:26.393 "data_offset": 0, 00:17:26.393 "data_size": 65536 00:17:26.393 }, 00:17:26.393 { 00:17:26.393 "name": "BaseBdev3", 00:17:26.393 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:26.393 "is_configured": true, 00:17:26.393 "data_offset": 0, 00:17:26.393 "data_size": 65536 00:17:26.393 }, 00:17:26.393 { 00:17:26.393 "name": "BaseBdev4", 00:17:26.393 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:26.393 "is_configured": true, 00:17:26.393 "data_offset": 0, 00:17:26.393 "data_size": 65536 00:17:26.393 } 00:17:26.393 ] 00:17:26.393 }' 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.393 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 151.00 IOPS, 453.00 MiB/s 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.911 "name": "raid_bdev1", 00:17:26.911 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:26.911 "strip_size_kb": 0, 00:17:26.911 "state": "online", 00:17:26.911 "raid_level": "raid1", 00:17:26.911 "superblock": false, 00:17:26.911 "num_base_bdevs": 4, 00:17:26.911 "num_base_bdevs_discovered": 3, 00:17:26.911 "num_base_bdevs_operational": 3, 00:17:26.911 "base_bdevs_list": [ 00:17:26.911 { 00:17:26.911 "name": null, 00:17:26.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.911 "is_configured": false, 00:17:26.911 "data_offset": 0, 00:17:26.911 "data_size": 65536 00:17:26.911 }, 00:17:26.911 { 00:17:26.911 "name": "BaseBdev2", 00:17:26.911 "uuid": "e30b29e0-faee-5f25-a228-d544313eccaf", 00:17:26.911 "is_configured": true, 00:17:26.911 "data_offset": 0, 00:17:26.911 "data_size": 65536 00:17:26.911 }, 00:17:26.911 { 00:17:26.911 "name": "BaseBdev3", 00:17:26.911 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:26.911 "is_configured": true, 00:17:26.911 "data_offset": 0, 
00:17:26.911 "data_size": 65536 00:17:26.911 }, 00:17:26.911 { 00:17:26.911 "name": "BaseBdev4", 00:17:26.911 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:26.911 "is_configured": true, 00:17:26.911 "data_offset": 0, 00:17:26.911 "data_size": 65536 00:17:26.911 } 00:17:26.911 ] 00:17:26.911 }' 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 [2024-09-27 22:34:22.726927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.911 22:34:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:27.169 [2024-09-27 22:34:22.790373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:27.169 [2024-09-27 22:34:22.792866] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.169 [2024-09-27 22:34:22.911661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:27.169 [2024-09-27 22:34:22.912283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:27.429 
[2024-09-27 22:34:23.139891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:27.429 [2024-09-27 22:34:23.140696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:27.689 140.33 IOPS, 421.00 MiB/s [2024-09-27 22:34:23.492444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:27.948 [2024-09-27 22:34:23.703788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.948 22:34:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.207 "name": "raid_bdev1", 00:17:28.207 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:28.207 "strip_size_kb": 0, 
00:17:28.207 "state": "online", 00:17:28.207 "raid_level": "raid1", 00:17:28.207 "superblock": false, 00:17:28.207 "num_base_bdevs": 4, 00:17:28.207 "num_base_bdevs_discovered": 4, 00:17:28.207 "num_base_bdevs_operational": 4, 00:17:28.207 "process": { 00:17:28.207 "type": "rebuild", 00:17:28.207 "target": "spare", 00:17:28.207 "progress": { 00:17:28.207 "blocks": 10240, 00:17:28.207 "percent": 15 00:17:28.207 } 00:17:28.207 }, 00:17:28.207 "base_bdevs_list": [ 00:17:28.207 { 00:17:28.207 "name": "spare", 00:17:28.207 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:28.207 "is_configured": true, 00:17:28.207 "data_offset": 0, 00:17:28.207 "data_size": 65536 00:17:28.207 }, 00:17:28.207 { 00:17:28.207 "name": "BaseBdev2", 00:17:28.207 "uuid": "e30b29e0-faee-5f25-a228-d544313eccaf", 00:17:28.207 "is_configured": true, 00:17:28.207 "data_offset": 0, 00:17:28.207 "data_size": 65536 00:17:28.207 }, 00:17:28.207 { 00:17:28.207 "name": "BaseBdev3", 00:17:28.207 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:28.207 "is_configured": true, 00:17:28.207 "data_offset": 0, 00:17:28.207 "data_size": 65536 00:17:28.207 }, 00:17:28.207 { 00:17:28.207 "name": "BaseBdev4", 00:17:28.207 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:28.207 "is_configured": true, 00:17:28.207 "data_offset": 0, 00:17:28.207 "data_size": 65536 00:17:28.207 } 00:17:28.207 ] 00:17:28.207 }' 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:28.207 22:34:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.207 22:34:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.207 [2024-09-27 22:34:23.962029] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.207 [2024-09-27 22:34:24.050852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:28.466 [2024-09-27 22:34:24.159353] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:28.466 [2024-09-27 22:34:24.159408] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.466 22:34:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.466 "name": "raid_bdev1", 00:17:28.466 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:28.466 "strip_size_kb": 0, 00:17:28.466 "state": "online", 00:17:28.466 "raid_level": "raid1", 00:17:28.466 "superblock": false, 00:17:28.466 "num_base_bdevs": 4, 00:17:28.466 "num_base_bdevs_discovered": 3, 00:17:28.466 "num_base_bdevs_operational": 3, 00:17:28.466 "process": { 00:17:28.466 "type": "rebuild", 00:17:28.466 "target": "spare", 00:17:28.466 "progress": { 00:17:28.466 "blocks": 14336, 00:17:28.466 "percent": 21 00:17:28.466 } 00:17:28.466 }, 00:17:28.466 "base_bdevs_list": [ 00:17:28.466 { 00:17:28.466 "name": "spare", 00:17:28.466 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:28.466 "is_configured": true, 00:17:28.466 "data_offset": 0, 00:17:28.466 "data_size": 65536 00:17:28.466 }, 00:17:28.466 { 00:17:28.466 "name": null, 00:17:28.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.466 "is_configured": false, 00:17:28.466 "data_offset": 0, 00:17:28.466 "data_size": 65536 00:17:28.466 }, 00:17:28.466 { 00:17:28.466 "name": "BaseBdev3", 00:17:28.466 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:28.466 "is_configured": true, 00:17:28.466 "data_offset": 0, 00:17:28.466 "data_size": 65536 00:17:28.466 }, 
00:17:28.466 { 00:17:28.466 "name": "BaseBdev4", 00:17:28.466 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:28.466 "is_configured": true, 00:17:28.466 "data_offset": 0, 00:17:28.466 "data_size": 65536 00:17:28.466 } 00:17:28.466 ] 00:17:28.466 }' 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.466 [2024-09-27 22:34:24.287401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=568 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.466 22:34:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.725 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.725 "name": "raid_bdev1", 00:17:28.725 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:28.725 "strip_size_kb": 0, 00:17:28.725 "state": "online", 00:17:28.725 "raid_level": "raid1", 00:17:28.725 "superblock": false, 00:17:28.725 "num_base_bdevs": 4, 00:17:28.725 "num_base_bdevs_discovered": 3, 00:17:28.725 "num_base_bdevs_operational": 3, 00:17:28.725 "process": { 00:17:28.725 "type": "rebuild", 00:17:28.725 "target": "spare", 00:17:28.725 "progress": { 00:17:28.725 "blocks": 16384, 00:17:28.725 "percent": 25 00:17:28.725 } 00:17:28.725 }, 00:17:28.725 "base_bdevs_list": [ 00:17:28.725 { 00:17:28.725 "name": "spare", 00:17:28.725 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:28.725 "is_configured": true, 00:17:28.725 "data_offset": 0, 00:17:28.725 "data_size": 65536 00:17:28.725 }, 00:17:28.725 { 00:17:28.725 "name": null, 00:17:28.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.725 "is_configured": false, 00:17:28.725 "data_offset": 0, 00:17:28.725 "data_size": 65536 00:17:28.725 }, 00:17:28.725 { 00:17:28.725 "name": "BaseBdev3", 00:17:28.725 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:28.725 "is_configured": true, 00:17:28.725 "data_offset": 0, 00:17:28.725 "data_size": 65536 00:17:28.725 }, 00:17:28.725 { 00:17:28.725 "name": "BaseBdev4", 00:17:28.725 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:28.725 "is_configured": true, 00:17:28.725 "data_offset": 0, 00:17:28.725 "data_size": 65536 00:17:28.725 } 00:17:28.725 ] 00:17:28.725 }' 00:17:28.725 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.725 122.50 IOPS, 367.50 MiB/s 22:34:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.725 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.725 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.725 22:34:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.984 [2024-09-27 22:34:24.734275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:29.281 [2024-09-27 22:34:24.975321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:29.281 [2024-09-27 22:34:24.982389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:29.798 113.20 IOPS, 339.60 MiB/s 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.798 "name": "raid_bdev1", 00:17:29.798 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:29.798 "strip_size_kb": 0, 00:17:29.798 "state": "online", 00:17:29.798 "raid_level": "raid1", 00:17:29.798 "superblock": false, 00:17:29.798 "num_base_bdevs": 4, 00:17:29.798 "num_base_bdevs_discovered": 3, 00:17:29.798 "num_base_bdevs_operational": 3, 00:17:29.798 "process": { 00:17:29.798 "type": "rebuild", 00:17:29.798 "target": "spare", 00:17:29.798 "progress": { 00:17:29.798 "blocks": 32768, 00:17:29.798 "percent": 50 00:17:29.798 } 00:17:29.798 }, 00:17:29.798 "base_bdevs_list": [ 00:17:29.798 { 00:17:29.798 "name": "spare", 00:17:29.798 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:29.798 "is_configured": true, 00:17:29.798 "data_offset": 0, 00:17:29.798 "data_size": 65536 00:17:29.798 }, 00:17:29.798 { 00:17:29.798 "name": null, 00:17:29.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.798 "is_configured": false, 00:17:29.798 "data_offset": 0, 00:17:29.798 "data_size": 65536 00:17:29.798 }, 00:17:29.798 { 00:17:29.798 "name": "BaseBdev3", 00:17:29.798 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:29.798 "is_configured": true, 00:17:29.798 "data_offset": 0, 00:17:29.798 "data_size": 65536 00:17:29.798 }, 00:17:29.798 { 00:17:29.798 "name": "BaseBdev4", 00:17:29.798 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:29.798 "is_configured": true, 00:17:29.798 "data_offset": 0, 00:17:29.798 "data_size": 65536 00:17:29.798 } 00:17:29.798 ] 00:17:29.798 }' 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.798 [2024-09-27 22:34:25.545304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.798 22:34:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.365 [2024-09-27 22:34:26.104726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:30.883 102.83 IOPS, 308.50 MiB/s 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:30.883 "name": "raid_bdev1", 00:17:30.883 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:30.883 "strip_size_kb": 0, 00:17:30.883 "state": "online", 00:17:30.883 "raid_level": "raid1", 00:17:30.883 "superblock": false, 00:17:30.883 "num_base_bdevs": 4, 00:17:30.883 "num_base_bdevs_discovered": 3, 00:17:30.883 "num_base_bdevs_operational": 3, 00:17:30.883 "process": { 00:17:30.883 "type": "rebuild", 00:17:30.883 "target": "spare", 00:17:30.883 "progress": { 00:17:30.883 "blocks": 51200, 00:17:30.883 "percent": 78 00:17:30.883 } 00:17:30.883 }, 00:17:30.883 "base_bdevs_list": [ 00:17:30.883 { 00:17:30.883 "name": "spare", 00:17:30.883 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:30.883 "is_configured": true, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 }, 00:17:30.883 { 00:17:30.883 "name": null, 00:17:30.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.883 "is_configured": false, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 }, 00:17:30.883 { 00:17:30.883 "name": "BaseBdev3", 00:17:30.883 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:30.883 "is_configured": true, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 }, 00:17:30.883 { 00:17:30.883 "name": "BaseBdev4", 00:17:30.883 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:30.883 "is_configured": true, 00:17:30.883 "data_offset": 0, 00:17:30.883 "data_size": 65536 00:17:30.883 } 00:17:30.883 ] 00:17:30.883 }' 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.883 [2024-09-27 22:34:26.641150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:30.883 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.884 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:30.884 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.884 22:34:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.452 [2024-09-27 22:34:27.294165] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:31.711 93.86 IOPS, 281.57 MiB/s [2024-09-27 22:34:27.400055] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:31.711 [2024-09-27 22:34:27.404059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.971 "name": "raid_bdev1", 
00:17:31.971 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:31.971 "strip_size_kb": 0, 00:17:31.971 "state": "online", 00:17:31.971 "raid_level": "raid1", 00:17:31.971 "superblock": false, 00:17:31.971 "num_base_bdevs": 4, 00:17:31.971 "num_base_bdevs_discovered": 3, 00:17:31.971 "num_base_bdevs_operational": 3, 00:17:31.971 "base_bdevs_list": [ 00:17:31.971 { 00:17:31.971 "name": "spare", 00:17:31.971 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:31.971 "is_configured": true, 00:17:31.971 "data_offset": 0, 00:17:31.971 "data_size": 65536 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "name": null, 00:17:31.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.971 "is_configured": false, 00:17:31.971 "data_offset": 0, 00:17:31.971 "data_size": 65536 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "name": "BaseBdev3", 00:17:31.971 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:31.971 "is_configured": true, 00:17:31.971 "data_offset": 0, 00:17:31.971 "data_size": 65536 00:17:31.971 }, 00:17:31.971 { 00:17:31.971 "name": "BaseBdev4", 00:17:31.971 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:31.971 "is_configured": true, 00:17:31.971 "data_offset": 0, 00:17:31.971 "data_size": 65536 00:17:31.971 } 00:17:31.971 ] 00:17:31.971 }' 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:31.971 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.283 "name": "raid_bdev1", 00:17:32.283 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:32.283 "strip_size_kb": 0, 00:17:32.283 "state": "online", 00:17:32.283 "raid_level": "raid1", 00:17:32.283 "superblock": false, 00:17:32.283 "num_base_bdevs": 4, 00:17:32.283 "num_base_bdevs_discovered": 3, 00:17:32.283 "num_base_bdevs_operational": 3, 00:17:32.283 "base_bdevs_list": [ 00:17:32.283 { 00:17:32.283 "name": "spare", 00:17:32.283 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:32.283 "is_configured": true, 00:17:32.283 "data_offset": 0, 00:17:32.283 "data_size": 65536 00:17:32.283 }, 00:17:32.283 { 00:17:32.283 "name": null, 00:17:32.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.283 "is_configured": false, 00:17:32.283 "data_offset": 0, 00:17:32.283 "data_size": 65536 00:17:32.283 }, 00:17:32.283 { 00:17:32.283 "name": "BaseBdev3", 00:17:32.283 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:32.283 "is_configured": true, 
00:17:32.283 "data_offset": 0, 00:17:32.283 "data_size": 65536 00:17:32.283 }, 00:17:32.283 { 00:17:32.283 "name": "BaseBdev4", 00:17:32.283 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:32.283 "is_configured": true, 00:17:32.283 "data_offset": 0, 00:17:32.283 "data_size": 65536 00:17:32.283 } 00:17:32.283 ] 00:17:32.283 }' 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.283 22:34:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.283 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.283 "name": "raid_bdev1", 00:17:32.283 "uuid": "ebf4f721-82a5-48c4-886a-91c6fa6430a8", 00:17:32.283 "strip_size_kb": 0, 00:17:32.283 "state": "online", 00:17:32.283 "raid_level": "raid1", 00:17:32.283 "superblock": false, 00:17:32.283 "num_base_bdevs": 4, 00:17:32.283 "num_base_bdevs_discovered": 3, 00:17:32.283 "num_base_bdevs_operational": 3, 00:17:32.283 "base_bdevs_list": [ 00:17:32.283 { 00:17:32.283 "name": "spare", 00:17:32.283 "uuid": "be8b0322-961d-5ebd-867d-e13ba3c2ddb5", 00:17:32.283 "is_configured": true, 00:17:32.283 "data_offset": 0, 00:17:32.283 "data_size": 65536 00:17:32.283 }, 00:17:32.283 { 00:17:32.283 "name": null, 00:17:32.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.283 "is_configured": false, 00:17:32.283 "data_offset": 0, 00:17:32.283 "data_size": 65536 00:17:32.284 }, 00:17:32.284 { 00:17:32.284 "name": "BaseBdev3", 00:17:32.284 "uuid": "684fa3c2-6fcd-5e66-85da-642bd48d67a5", 00:17:32.284 "is_configured": true, 00:17:32.284 "data_offset": 0, 00:17:32.284 "data_size": 65536 00:17:32.284 }, 00:17:32.284 { 00:17:32.284 "name": "BaseBdev4", 00:17:32.284 "uuid": "a33f7c67-d64b-5542-b4e6-29a67a6707cd", 00:17:32.284 "is_configured": true, 00:17:32.284 "data_offset": 0, 00:17:32.284 "data_size": 65536 00:17:32.284 } 00:17:32.284 ] 00:17:32.284 }' 00:17:32.284 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.284 22:34:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.803 87.25 IOPS, 261.75 MiB/s 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.803 [2024-09-27 22:34:28.464579] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.803 [2024-09-27 22:34:28.464621] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.803 00:17:32.803 Latency(us) 00:17:32.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.803 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:32.803 raid_bdev1 : 8.16 86.56 259.67 0.00 0.00 15782.39 348.74 112858.78 00:17:32.803 =================================================================================================================== 00:17:32.803 Total : 86.56 259.67 0.00 0.00 15782.39 348.74 112858.78 00:17:32.803 [2024-09-27 22:34:28.533674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.803 { 00:17:32.803 "results": [ 00:17:32.803 { 00:17:32.803 "job": "raid_bdev1", 00:17:32.803 "core_mask": "0x1", 00:17:32.803 "workload": "randrw", 00:17:32.803 "percentage": 50, 00:17:32.803 "status": "finished", 00:17:32.803 "queue_depth": 2, 00:17:32.803 "io_size": 3145728, 00:17:32.803 "runtime": 8.156491, 00:17:32.803 "iops": 86.55682940127072, 00:17:32.803 "mibps": 259.67048820381217, 00:17:32.803 "io_failed": 0, 00:17:32.803 "io_timeout": 0, 00:17:32.803 "avg_latency_us": 15782.394029375291, 00:17:32.803 "min_latency_us": 348.73574297188753, 00:17:32.803 "max_latency_us": 112858.78232931727 00:17:32.803 } 00:17:32.803 ], 00:17:32.803 "core_count": 1 00:17:32.803 } 00:17:32.803 [2024-09-27 22:34:28.534006] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.803 [2024-09-27 22:34:28.534149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.803 [2024-09-27 22:34:28.534177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.803 22:34:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.803 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:33.062 /dev/nbd0 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.062 1+0 records in 00:17:33.062 1+0 records out 00:17:33.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609909 s, 6.7 MB/s 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.062 22:34:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:33.062 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.063 22:34:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:33.321 /dev/nbd1 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.321 1+0 records in 00:17:33.321 1+0 records out 00:17:33.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301191 s, 13.6 MB/s 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.321 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.580 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.839 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:34.097 /dev/nbd1 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # local i 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.097 1+0 records in 00:17:34.097 1+0 records out 00:17:34.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417825 s, 9.8 MB/s 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.097 22:34:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.358 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:34.615 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:34.615 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:34.615 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:34.615 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.615 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.615 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.616 22:34:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.616 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79757 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 79757 ']' 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 79757 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79757 00:17:34.873 killing process with pid 79757 00:17:34.873 Received shutdown signal, test time was 
about 10.243614 seconds 00:17:34.873 00:17:34.873 Latency(us) 00:17:34.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.873 =================================================================================================================== 00:17:34.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79757' 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 79757 00:17:34.873 [2024-09-27 22:34:30.592186] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.873 22:34:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 79757 00:17:35.445 [2024-09-27 22:34:31.051045] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.351 22:34:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:37.351 00:17:37.351 real 0m15.153s 00:17:37.351 user 0m18.628s 00:17:37.351 sys 0m2.351s 00:17:37.351 22:34:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.351 ************************************ 00:17:37.351 END TEST raid_rebuild_test_io 00:17:37.351 ************************************ 00:17:37.351 22:34:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.611 22:34:33 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:37.611 22:34:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:37.611 22:34:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.611 22:34:33 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.611 ************************************ 00:17:37.611 START TEST raid_rebuild_test_sb_io 00:17:37.611 ************************************ 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs 
)) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80185 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80185 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@831 -- # '[' -z 80185 ']' 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.611 22:34:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.611 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:37.611 Zero copy mechanism will not be used. 00:17:37.611 [2024-09-27 22:34:33.399951] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:17:37.611 [2024-09-27 22:34:33.400107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80185 ] 00:17:37.871 [2024-09-27 22:34:33.573948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.130 [2024-09-27 22:34:33.821815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.389 [2024-09-27 22:34:34.072943] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.389 [2024-09-27 22:34:34.072983] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.956 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 BaseBdev1_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 [2024-09-27 22:34:34.642807] vbdev_passthru.c: 
687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.957 [2024-09-27 22:34:34.642902] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.957 [2024-09-27 22:34:34.642934] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:38.957 [2024-09-27 22:34:34.642953] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.957 [2024-09-27 22:34:34.645694] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.957 [2024-09-27 22:34:34.645909] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.957 BaseBdev1 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 BaseBdev2_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 [2024-09-27 22:34:34.707143] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:38.957 [2024-09-27 22:34:34.707233] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:38.957 [2024-09-27 22:34:34.707263] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:38.957 [2024-09-27 22:34:34.707281] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.957 [2024-09-27 22:34:34.710027] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.957 [2024-09-27 22:34:34.710093] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.957 BaseBdev2 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 BaseBdev3_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 [2024-09-27 22:34:34.770783] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:38.957 [2024-09-27 22:34:34.770867] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.957 [2024-09-27 22:34:34.770895] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:38.957 
[2024-09-27 22:34:34.770910] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.957 [2024-09-27 22:34:34.773602] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.957 [2024-09-27 22:34:34.773795] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:38.957 BaseBdev3 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 BaseBdev4_malloc 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.957 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 [2024-09-27 22:34:34.836016] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:39.217 [2024-09-27 22:34:34.836268] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.217 [2024-09-27 22:34:34.836303] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:39.217 [2024-09-27 22:34:34.836320] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.217 [2024-09-27 22:34:34.838953] vbdev_passthru.c: 
790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.217 [2024-09-27 22:34:34.839016] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:39.217 BaseBdev4 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 spare_malloc 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 spare_delay 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 [2024-09-27 22:34:34.912457] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.217 [2024-09-27 22:34:34.912538] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.217 [2024-09-27 22:34:34.912566] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:17:39.217 [2024-09-27 22:34:34.912582] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.217 [2024-09-27 22:34:34.915304] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.217 [2024-09-27 22:34:34.915357] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.217 spare 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.217 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.218 [2024-09-27 22:34:34.924530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.218 [2024-09-27 22:34:34.927034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.218 [2024-09-27 22:34:34.927120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.218 [2024-09-27 22:34:34.927182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:39.218 [2024-09-27 22:34:34.927436] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.218 [2024-09-27 22:34:34.927452] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:39.218 [2024-09-27 22:34:34.927779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:39.218 [2024-09-27 22:34:34.928025] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.218 [2024-09-27 22:34:34.928040] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.218 [2024-09-27 22:34:34.928234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.218 "name": "raid_bdev1", 00:17:39.218 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:39.218 "strip_size_kb": 0, 00:17:39.218 "state": "online", 00:17:39.218 "raid_level": "raid1", 00:17:39.218 "superblock": true, 00:17:39.218 "num_base_bdevs": 4, 00:17:39.218 "num_base_bdevs_discovered": 4, 00:17:39.218 "num_base_bdevs_operational": 4, 00:17:39.218 "base_bdevs_list": [ 00:17:39.218 { 00:17:39.218 "name": "BaseBdev1", 00:17:39.218 "uuid": "b636507e-b566-5f5b-af09-09d04865cc4f", 00:17:39.218 "is_configured": true, 00:17:39.218 "data_offset": 2048, 00:17:39.218 "data_size": 63488 00:17:39.218 }, 00:17:39.218 { 00:17:39.218 "name": "BaseBdev2", 00:17:39.218 "uuid": "0b53cffa-77f5-5d80-a165-bc93b55952a0", 00:17:39.218 "is_configured": true, 00:17:39.218 "data_offset": 2048, 00:17:39.218 "data_size": 63488 00:17:39.218 }, 00:17:39.218 { 00:17:39.218 "name": "BaseBdev3", 00:17:39.218 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:39.218 "is_configured": true, 00:17:39.218 "data_offset": 2048, 00:17:39.218 "data_size": 63488 00:17:39.218 }, 00:17:39.218 { 00:17:39.218 "name": "BaseBdev4", 00:17:39.218 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:39.218 "is_configured": true, 00:17:39.218 "data_offset": 2048, 00:17:39.218 "data_size": 63488 00:17:39.218 } 00:17:39.218 ] 00:17:39.218 }' 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.218 22:34:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.477 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.477 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.477 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.477 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:39.477 [2024-09-27 22:34:35.328451] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.736 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.736 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:39.736 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.736 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.736 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.737 [2024-09-27 22:34:35.412041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.737 22:34:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.737 "name": "raid_bdev1", 00:17:39.737 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:39.737 "strip_size_kb": 0, 00:17:39.737 "state": "online", 00:17:39.737 "raid_level": "raid1", 00:17:39.737 
"superblock": true, 00:17:39.737 "num_base_bdevs": 4, 00:17:39.737 "num_base_bdevs_discovered": 3, 00:17:39.737 "num_base_bdevs_operational": 3, 00:17:39.737 "base_bdevs_list": [ 00:17:39.737 { 00:17:39.737 "name": null, 00:17:39.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.737 "is_configured": false, 00:17:39.737 "data_offset": 0, 00:17:39.737 "data_size": 63488 00:17:39.737 }, 00:17:39.737 { 00:17:39.737 "name": "BaseBdev2", 00:17:39.737 "uuid": "0b53cffa-77f5-5d80-a165-bc93b55952a0", 00:17:39.737 "is_configured": true, 00:17:39.737 "data_offset": 2048, 00:17:39.737 "data_size": 63488 00:17:39.737 }, 00:17:39.737 { 00:17:39.737 "name": "BaseBdev3", 00:17:39.737 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:39.737 "is_configured": true, 00:17:39.737 "data_offset": 2048, 00:17:39.737 "data_size": 63488 00:17:39.737 }, 00:17:39.737 { 00:17:39.737 "name": "BaseBdev4", 00:17:39.737 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:39.737 "is_configured": true, 00:17:39.737 "data_offset": 2048, 00:17:39.737 "data_size": 63488 00:17:39.737 } 00:17:39.737 ] 00:17:39.737 }' 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.737 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.737 [2024-09-27 22:34:35.522051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:39.737 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:39.737 Zero copy mechanism will not be used. 00:17:39.737 Running I/O for 60 seconds... 
00:17:39.997 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:39.997 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.997 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.997 [2024-09-27 22:34:35.826396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.997 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.997 22:34:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.255 [2024-09-27 22:34:35.900117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:40.255 [2024-09-27 22:34:35.902661] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.255 [2024-09-27 22:34:36.021868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:40.255 [2024-09-27 22:34:36.023585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:40.514 [2024-09-27 22:34:36.267395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:40.514 [2024-09-27 22:34:36.268196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:40.773 142.00 IOPS, 426.00 MiB/s [2024-09-27 22:34:36.639258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:40.773 [2024-09-27 22:34:36.641082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.032 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.033 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.033 [2024-09-27 22:34:36.892844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:41.401 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.401 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.401 "name": "raid_bdev1", 00:17:41.401 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:41.401 "strip_size_kb": 0, 00:17:41.401 "state": "online", 00:17:41.401 "raid_level": "raid1", 00:17:41.401 "superblock": true, 00:17:41.401 "num_base_bdevs": 4, 00:17:41.401 "num_base_bdevs_discovered": 4, 00:17:41.401 "num_base_bdevs_operational": 4, 00:17:41.401 "process": { 00:17:41.401 "type": "rebuild", 00:17:41.401 "target": "spare", 00:17:41.401 "progress": { 00:17:41.401 "blocks": 10240, 00:17:41.401 "percent": 16 00:17:41.401 } 00:17:41.401 }, 00:17:41.401 "base_bdevs_list": [ 00:17:41.401 { 00:17:41.401 "name": "spare", 00:17:41.401 "uuid": 
"577e8162-203e-5404-83d5-075a69868565", 00:17:41.401 "is_configured": true, 00:17:41.401 "data_offset": 2048, 00:17:41.401 "data_size": 63488 00:17:41.401 }, 00:17:41.401 { 00:17:41.401 "name": "BaseBdev2", 00:17:41.401 "uuid": "0b53cffa-77f5-5d80-a165-bc93b55952a0", 00:17:41.401 "is_configured": true, 00:17:41.401 "data_offset": 2048, 00:17:41.401 "data_size": 63488 00:17:41.401 }, 00:17:41.401 { 00:17:41.401 "name": "BaseBdev3", 00:17:41.401 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:41.401 "is_configured": true, 00:17:41.401 "data_offset": 2048, 00:17:41.401 "data_size": 63488 00:17:41.401 }, 00:17:41.401 { 00:17:41.401 "name": "BaseBdev4", 00:17:41.401 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:41.402 "is_configured": true, 00:17:41.402 "data_offset": 2048, 00:17:41.402 "data_size": 63488 00:17:41.402 } 00:17:41.402 ] 00:17:41.402 }' 00:17:41.402 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.402 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.402 22:34:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.402 [2024-09-27 22:34:37.037458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.402 [2024-09-27 22:34:37.180187] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:41.402 [2024-09-27 22:34:37.192284] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.402 [2024-09-27 22:34:37.192554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.402 [2024-09-27 22:34:37.192608] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.402 [2024-09-27 22:34:37.224376] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.402 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.661 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.661 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.661 "name": "raid_bdev1", 00:17:41.661 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:41.661 "strip_size_kb": 0, 00:17:41.661 "state": "online", 00:17:41.661 "raid_level": "raid1", 00:17:41.661 "superblock": true, 00:17:41.661 "num_base_bdevs": 4, 00:17:41.661 "num_base_bdevs_discovered": 3, 00:17:41.661 "num_base_bdevs_operational": 3, 00:17:41.661 "base_bdevs_list": [ 00:17:41.661 { 00:17:41.661 "name": null, 00:17:41.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.661 "is_configured": false, 00:17:41.661 "data_offset": 0, 00:17:41.661 "data_size": 63488 00:17:41.661 }, 00:17:41.661 { 00:17:41.661 "name": "BaseBdev2", 00:17:41.661 "uuid": "0b53cffa-77f5-5d80-a165-bc93b55952a0", 00:17:41.661 "is_configured": true, 00:17:41.661 "data_offset": 2048, 00:17:41.661 "data_size": 63488 00:17:41.661 }, 00:17:41.661 { 00:17:41.661 "name": "BaseBdev3", 00:17:41.661 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:41.661 "is_configured": true, 00:17:41.661 "data_offset": 2048, 00:17:41.661 "data_size": 63488 00:17:41.661 }, 00:17:41.661 { 00:17:41.661 "name": "BaseBdev4", 00:17:41.661 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:41.661 "is_configured": true, 00:17:41.661 "data_offset": 2048, 00:17:41.661 "data_size": 63488 00:17:41.661 } 00:17:41.661 ] 00:17:41.661 }' 00:17:41.661 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.661 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.920 139.50 IOPS, 418.50 MiB/s 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:41.920 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.920 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.920 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.920 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.920 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.921 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.921 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.921 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.921 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.921 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.921 "name": "raid_bdev1", 00:17:41.921 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:41.921 "strip_size_kb": 0, 00:17:41.921 "state": "online", 00:17:41.921 "raid_level": "raid1", 00:17:41.921 "superblock": true, 00:17:41.921 "num_base_bdevs": 4, 00:17:41.921 "num_base_bdevs_discovered": 3, 00:17:41.921 "num_base_bdevs_operational": 3, 00:17:41.921 "base_bdevs_list": [ 00:17:41.921 { 00:17:41.921 "name": null, 00:17:41.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.921 "is_configured": false, 00:17:41.921 "data_offset": 0, 00:17:41.921 "data_size": 63488 00:17:41.921 }, 00:17:41.921 { 00:17:41.921 "name": "BaseBdev2", 00:17:41.921 "uuid": "0b53cffa-77f5-5d80-a165-bc93b55952a0", 00:17:41.921 "is_configured": true, 00:17:41.921 "data_offset": 2048, 00:17:41.921 "data_size": 63488 00:17:41.921 }, 
00:17:41.921 { 00:17:41.921 "name": "BaseBdev3", 00:17:41.921 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:41.921 "is_configured": true, 00:17:41.921 "data_offset": 2048, 00:17:41.921 "data_size": 63488 00:17:41.921 }, 00:17:41.921 { 00:17:41.921 "name": "BaseBdev4", 00:17:41.921 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:41.921 "is_configured": true, 00:17:41.921 "data_offset": 2048, 00:17:41.921 "data_size": 63488 00:17:41.921 } 00:17:41.921 ] 00:17:41.921 }' 00:17:41.921 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.179 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.179 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.180 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.180 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.180 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.180 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.180 [2024-09-27 22:34:37.847389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.180 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.180 22:34:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.180 [2024-09-27 22:34:37.904181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:42.180 [2024-09-27 22:34:37.906829] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.180 [2024-09-27 22:34:38.030382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:17:42.749 [2024-09-27 22:34:38.445680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:43.008 157.67 IOPS, 473.00 MiB/s [2024-09-27 22:34:38.880773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.269 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.269 "name": "raid_bdev1", 00:17:43.269 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:43.269 "strip_size_kb": 0, 00:17:43.269 "state": "online", 00:17:43.269 "raid_level": "raid1", 00:17:43.269 "superblock": true, 00:17:43.269 "num_base_bdevs": 4, 00:17:43.269 "num_base_bdevs_discovered": 4, 00:17:43.269 "num_base_bdevs_operational": 4, 00:17:43.269 
"process": { 00:17:43.269 "type": "rebuild", 00:17:43.269 "target": "spare", 00:17:43.269 "progress": { 00:17:43.269 "blocks": 14336, 00:17:43.269 "percent": 22 00:17:43.269 } 00:17:43.269 }, 00:17:43.269 "base_bdevs_list": [ 00:17:43.269 { 00:17:43.269 "name": "spare", 00:17:43.269 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:43.269 "is_configured": true, 00:17:43.269 "data_offset": 2048, 00:17:43.269 "data_size": 63488 00:17:43.269 }, 00:17:43.269 { 00:17:43.269 "name": "BaseBdev2", 00:17:43.269 "uuid": "0b53cffa-77f5-5d80-a165-bc93b55952a0", 00:17:43.269 "is_configured": true, 00:17:43.269 "data_offset": 2048, 00:17:43.269 "data_size": 63488 00:17:43.269 }, 00:17:43.269 { 00:17:43.269 "name": "BaseBdev3", 00:17:43.270 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:43.270 "is_configured": true, 00:17:43.270 "data_offset": 2048, 00:17:43.270 "data_size": 63488 00:17:43.270 }, 00:17:43.270 { 00:17:43.270 "name": "BaseBdev4", 00:17:43.270 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:43.270 "is_configured": true, 00:17:43.270 "data_offset": 2048, 00:17:43.270 "data_size": 63488 00:17:43.270 } 00:17:43.270 ] 00:17:43.270 }' 00:17:43.270 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.270 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.270 22:34:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.270 [2024-09-27 22:34:39.014024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false 
']' 00:17:43.270 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.270 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.270 [2024-09-27 22:34:39.055525] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.528 [2024-09-27 22:34:39.396190] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:43.528 [2024-09-27 22:34:39.396483] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:43.786 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.786 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:43.786 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.787 22:34:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.787 "name": "raid_bdev1", 00:17:43.787 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:43.787 "strip_size_kb": 0, 00:17:43.787 "state": "online", 00:17:43.787 "raid_level": "raid1", 00:17:43.787 "superblock": true, 00:17:43.787 "num_base_bdevs": 4, 00:17:43.787 "num_base_bdevs_discovered": 3, 00:17:43.787 "num_base_bdevs_operational": 3, 00:17:43.787 "process": { 00:17:43.787 "type": "rebuild", 00:17:43.787 "target": "spare", 00:17:43.787 "progress": { 00:17:43.787 "blocks": 18432, 00:17:43.787 "percent": 29 00:17:43.787 } 00:17:43.787 }, 00:17:43.787 "base_bdevs_list": [ 00:17:43.787 { 00:17:43.787 "name": "spare", 00:17:43.787 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:43.787 "is_configured": true, 00:17:43.787 "data_offset": 2048, 00:17:43.787 "data_size": 63488 00:17:43.787 }, 00:17:43.787 { 00:17:43.787 "name": null, 00:17:43.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.787 "is_configured": false, 00:17:43.787 "data_offset": 0, 00:17:43.787 "data_size": 63488 00:17:43.787 }, 00:17:43.787 { 00:17:43.787 "name": "BaseBdev3", 00:17:43.787 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:43.787 "is_configured": true, 00:17:43.787 "data_offset": 2048, 00:17:43.787 "data_size": 
63488 00:17:43.787 }, 00:17:43.787 { 00:17:43.787 "name": "BaseBdev4", 00:17:43.787 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:43.787 "is_configured": true, 00:17:43.787 "data_offset": 2048, 00:17:43.787 "data_size": 63488 00:17:43.787 } 00:17:43.787 ] 00:17:43.787 }' 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.787 [2024-09-27 22:34:39.515968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:43.787 135.25 IOPS, 405.75 MiB/s 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=583 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.787 
22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.787 "name": "raid_bdev1", 00:17:43.787 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:43.787 "strip_size_kb": 0, 00:17:43.787 "state": "online", 00:17:43.787 "raid_level": "raid1", 00:17:43.787 "superblock": true, 00:17:43.787 "num_base_bdevs": 4, 00:17:43.787 "num_base_bdevs_discovered": 3, 00:17:43.787 "num_base_bdevs_operational": 3, 00:17:43.787 "process": { 00:17:43.787 "type": "rebuild", 00:17:43.787 "target": "spare", 00:17:43.787 "progress": { 00:17:43.787 "blocks": 20480, 00:17:43.787 "percent": 32 00:17:43.787 } 00:17:43.787 }, 00:17:43.787 "base_bdevs_list": [ 00:17:43.787 { 00:17:43.787 "name": "spare", 00:17:43.787 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:43.787 "is_configured": true, 00:17:43.787 "data_offset": 2048, 00:17:43.787 "data_size": 63488 00:17:43.787 }, 00:17:43.787 { 00:17:43.787 "name": null, 00:17:43.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.787 "is_configured": false, 00:17:43.787 "data_offset": 0, 00:17:43.787 "data_size": 63488 00:17:43.787 }, 00:17:43.787 { 00:17:43.787 "name": "BaseBdev3", 00:17:43.787 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:43.787 "is_configured": true, 00:17:43.787 "data_offset": 2048, 00:17:43.787 "data_size": 63488 00:17:43.787 }, 00:17:43.787 { 00:17:43.787 "name": "BaseBdev4", 00:17:43.787 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:43.787 "is_configured": true, 00:17:43.787 "data_offset": 2048, 00:17:43.787 "data_size": 63488 00:17:43.787 } 00:17:43.787 ] 00:17:43.787 }' 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.787 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.045 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.045 22:34:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.376 [2024-09-27 22:34:39.994687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:44.943 120.00 IOPS, 360.00 MiB/s [2024-09-27 22:34:40.639537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.943 "name": "raid_bdev1", 00:17:44.943 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:44.943 "strip_size_kb": 0, 00:17:44.943 "state": "online", 00:17:44.943 "raid_level": "raid1", 00:17:44.943 "superblock": true, 00:17:44.943 "num_base_bdevs": 4, 00:17:44.943 "num_base_bdevs_discovered": 3, 00:17:44.943 "num_base_bdevs_operational": 3, 00:17:44.943 "process": { 00:17:44.943 "type": "rebuild", 00:17:44.943 "target": "spare", 00:17:44.943 "progress": { 00:17:44.943 "blocks": 40960, 00:17:44.943 "percent": 64 00:17:44.943 } 00:17:44.943 }, 00:17:44.943 "base_bdevs_list": [ 00:17:44.943 { 00:17:44.943 "name": "spare", 00:17:44.943 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:44.943 "is_configured": true, 00:17:44.943 "data_offset": 2048, 00:17:44.943 "data_size": 63488 00:17:44.943 }, 00:17:44.943 { 00:17:44.943 "name": null, 00:17:44.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.943 "is_configured": false, 00:17:44.943 "data_offset": 0, 00:17:44.943 "data_size": 63488 00:17:44.943 }, 00:17:44.943 { 00:17:44.943 "name": "BaseBdev3", 00:17:44.943 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:44.943 "is_configured": true, 00:17:44.943 "data_offset": 2048, 00:17:44.943 "data_size": 63488 00:17:44.943 }, 00:17:44.943 { 00:17:44.943 "name": "BaseBdev4", 00:17:44.943 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:44.943 "is_configured": true, 00:17:44.943 "data_offset": 2048, 00:17:44.943 "data_size": 63488 00:17:44.943 } 00:17:44.943 ] 00:17:44.943 }' 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.943 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.943 22:34:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.202 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.202 22:34:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.460 [2024-09-27 22:34:41.156127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:45.460 [2024-09-27 22:34:41.273716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:45.718 [2024-09-27 22:34:41.498828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:45.977 106.50 IOPS, 319.50 MiB/s 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.977 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.235 22:34:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.235 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.235 "name": "raid_bdev1", 00:17:46.235 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:46.235 "strip_size_kb": 0, 00:17:46.235 "state": "online", 00:17:46.235 "raid_level": "raid1", 00:17:46.235 "superblock": true, 00:17:46.235 "num_base_bdevs": 4, 00:17:46.235 "num_base_bdevs_discovered": 3, 00:17:46.235 "num_base_bdevs_operational": 3, 00:17:46.235 "process": { 00:17:46.235 "type": "rebuild", 00:17:46.235 "target": "spare", 00:17:46.235 "progress": { 00:17:46.235 "blocks": 61440, 00:17:46.235 "percent": 96 00:17:46.235 } 00:17:46.235 }, 00:17:46.235 "base_bdevs_list": [ 00:17:46.235 { 00:17:46.235 "name": "spare", 00:17:46.235 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:46.235 "is_configured": true, 00:17:46.235 "data_offset": 2048, 00:17:46.236 "data_size": 63488 00:17:46.236 }, 00:17:46.236 { 00:17:46.236 "name": null, 00:17:46.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.236 "is_configured": false, 00:17:46.236 "data_offset": 0, 00:17:46.236 "data_size": 63488 00:17:46.236 }, 00:17:46.236 { 00:17:46.236 "name": "BaseBdev3", 00:17:46.236 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:46.236 "is_configured": true, 00:17:46.236 "data_offset": 2048, 00:17:46.236 "data_size": 63488 00:17:46.236 }, 00:17:46.236 { 00:17:46.236 "name": "BaseBdev4", 00:17:46.236 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:46.236 "is_configured": true, 00:17:46.236 "data_offset": 2048, 00:17:46.236 "data_size": 63488 00:17:46.236 } 00:17:46.236 ] 00:17:46.236 }' 00:17:46.236 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.236 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.236 [2024-09-27 22:34:41.935422] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:46.236 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.236 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.236 22:34:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.236 [2024-09-27 22:34:42.039744] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:46.236 [2024-09-27 22:34:42.044046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.379 95.86 IOPS, 287.57 MiB/s 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.379 22:34:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.379 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.379 22:34:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.379 "name": "raid_bdev1", 00:17:47.379 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:47.379 "strip_size_kb": 0, 00:17:47.379 "state": "online", 00:17:47.379 "raid_level": "raid1", 00:17:47.379 "superblock": true, 00:17:47.379 "num_base_bdevs": 4, 00:17:47.379 "num_base_bdevs_discovered": 3, 00:17:47.379 "num_base_bdevs_operational": 3, 00:17:47.379 "base_bdevs_list": [ 00:17:47.379 { 00:17:47.379 "name": "spare", 00:17:47.379 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:47.379 "is_configured": true, 00:17:47.379 "data_offset": 2048, 00:17:47.379 "data_size": 63488 00:17:47.379 }, 00:17:47.379 { 00:17:47.379 "name": null, 00:17:47.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.379 "is_configured": false, 00:17:47.380 "data_offset": 0, 00:17:47.380 "data_size": 63488 00:17:47.380 }, 00:17:47.380 { 00:17:47.380 "name": "BaseBdev3", 00:17:47.380 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:47.380 "is_configured": true, 00:17:47.380 "data_offset": 2048, 00:17:47.380 "data_size": 63488 00:17:47.380 }, 00:17:47.380 { 00:17:47.380 "name": "BaseBdev4", 00:17:47.380 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:47.380 "is_configured": true, 00:17:47.380 "data_offset": 2048, 00:17:47.380 "data_size": 63488 00:17:47.380 } 00:17:47.380 ] 00:17:47.380 }' 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.380 "name": "raid_bdev1", 00:17:47.380 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:47.380 "strip_size_kb": 0, 00:17:47.380 "state": "online", 00:17:47.380 "raid_level": "raid1", 00:17:47.380 "superblock": true, 00:17:47.380 "num_base_bdevs": 4, 00:17:47.380 "num_base_bdevs_discovered": 3, 00:17:47.380 "num_base_bdevs_operational": 3, 00:17:47.380 "base_bdevs_list": [ 00:17:47.380 { 00:17:47.380 "name": "spare", 00:17:47.380 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:47.380 "is_configured": true, 00:17:47.380 "data_offset": 2048, 00:17:47.380 "data_size": 63488 00:17:47.380 }, 00:17:47.380 { 00:17:47.380 "name": null, 00:17:47.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.380 "is_configured": false, 00:17:47.380 "data_offset": 0, 00:17:47.380 "data_size": 63488 
00:17:47.380 }, 00:17:47.380 { 00:17:47.380 "name": "BaseBdev3", 00:17:47.380 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:47.380 "is_configured": true, 00:17:47.380 "data_offset": 2048, 00:17:47.380 "data_size": 63488 00:17:47.380 }, 00:17:47.380 { 00:17:47.380 "name": "BaseBdev4", 00:17:47.380 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:47.380 "is_configured": true, 00:17:47.380 "data_offset": 2048, 00:17:47.380 "data_size": 63488 00:17:47.380 } 00:17:47.380 ] 00:17:47.380 }' 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.380 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.650 "name": "raid_bdev1", 00:17:47.650 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:47.650 "strip_size_kb": 0, 00:17:47.650 "state": "online", 00:17:47.650 "raid_level": "raid1", 00:17:47.650 "superblock": true, 00:17:47.650 "num_base_bdevs": 4, 00:17:47.650 "num_base_bdevs_discovered": 3, 00:17:47.650 "num_base_bdevs_operational": 3, 00:17:47.650 "base_bdevs_list": [ 00:17:47.650 { 00:17:47.650 "name": "spare", 00:17:47.650 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:47.650 "is_configured": true, 00:17:47.650 "data_offset": 2048, 00:17:47.650 "data_size": 63488 00:17:47.650 }, 00:17:47.650 { 00:17:47.650 "name": null, 00:17:47.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.650 "is_configured": false, 00:17:47.650 "data_offset": 0, 00:17:47.650 "data_size": 63488 00:17:47.650 }, 00:17:47.650 { 00:17:47.650 "name": "BaseBdev3", 00:17:47.650 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:47.650 "is_configured": true, 00:17:47.650 "data_offset": 2048, 00:17:47.650 "data_size": 63488 00:17:47.650 }, 00:17:47.650 { 00:17:47.650 "name": "BaseBdev4", 00:17:47.650 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:47.650 "is_configured": true, 00:17:47.650 "data_offset": 
2048, 00:17:47.650 "data_size": 63488 00:17:47.650 } 00:17:47.650 ] 00:17:47.650 }' 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.650 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.909 88.00 IOPS, 264.00 MiB/s 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:47.909 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.909 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.909 [2024-09-27 22:34:43.683771] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:47.909 [2024-09-27 22:34:43.683996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.909 00:17:47.909 Latency(us) 00:17:47.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.909 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:47.909 raid_bdev1 : 8.27 85.95 257.85 0.00 0.00 17517.98 375.06 119596.62 00:17:47.909 =================================================================================================================== 00:17:47.909 Total : 85.95 257.85 0.00 0.00 17517.98 375.06 119596.62 00:17:48.167 [2024-09-27 22:34:43.809366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.167 [2024-09-27 22:34:43.809640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.167 [2024-09-27 22:34:43.809805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.167 { 00:17:48.167 "results": [ 00:17:48.167 { 00:17:48.167 "job": "raid_bdev1", 00:17:48.167 "core_mask": "0x1", 00:17:48.167 "workload": "randrw", 00:17:48.167 "percentage": 50, 00:17:48.167 "status": "finished", 00:17:48.167 
"queue_depth": 2, 00:17:48.167 "io_size": 3145728, 00:17:48.167 "runtime": 8.27214, 00:17:48.167 "iops": 85.95115653265056, 00:17:48.167 "mibps": 257.8534695979517, 00:17:48.167 "io_failed": 0, 00:17:48.167 "io_timeout": 0, 00:17:48.167 "avg_latency_us": 17517.982293167042, 00:17:48.167 "min_latency_us": 375.05542168674697, 00:17:48.167 "max_latency_us": 119596.62008032129 00:17:48.167 } 00:17:48.167 ], 00:17:48.167 "core_count": 1 00:17:48.167 } 00:17:48.167 [2024-09-27 22:34:43.810037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:48.167 22:34:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.167 22:34:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:48.426 /dev/nbd0 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:48.426 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.427 1+0 records in 00:17:48.427 1+0 records out 00:17:48.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036624 s, 11.2 MB/s 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.427 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:48.685 /dev/nbd1 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:17:48.685 1+0 records in 00:17:48.685 1+0 records out 00:17:48.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340722 s, 12.0 MB/s 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.685 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:48.944 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:49.201 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.202 22:34:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.202 22:34:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:49.460 /dev/nbd1 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.460 1+0 records in 00:17:49.460 1+0 records out 00:17:49.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287263 s, 14.3 MB/s 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.460 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.718 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:49.977 22:34:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.977 [2024-09-27 22:34:45.846386] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.977 [2024-09-27 22:34:45.846456] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.977 [2024-09-27 22:34:45.846489] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:49.977 [2024-09-27 22:34:45.846503] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.977 [2024-09-27 22:34:45.849236] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.977 [2024-09-27 22:34:45.849283] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.977 [2024-09-27 22:34:45.849393] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.977 [2024-09-27 22:34:45.849454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.977 [2024-09-27 22:34:45.849616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.977 [2024-09-27 22:34:45.849714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:49.977 spare 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.977 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.269 [2024-09-27 22:34:45.949659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:50.269 [2024-09-27 22:34:45.949878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:50.269 [2024-09-27 22:34:45.950309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:50.269 [2024-09-27 22:34:45.950554] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:50.269 [2024-09-27 22:34:45.950571] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:50.269 [2024-09-27 22:34:45.950794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.269 22:34:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.269 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.269 "name": "raid_bdev1", 00:17:50.269 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:50.269 "strip_size_kb": 0, 00:17:50.269 "state": "online", 00:17:50.269 "raid_level": "raid1", 00:17:50.269 "superblock": true, 00:17:50.269 "num_base_bdevs": 4, 00:17:50.269 "num_base_bdevs_discovered": 3, 00:17:50.269 "num_base_bdevs_operational": 3, 00:17:50.269 "base_bdevs_list": [ 00:17:50.269 { 00:17:50.269 "name": "spare", 00:17:50.269 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:50.269 "is_configured": true, 00:17:50.269 "data_offset": 2048, 00:17:50.269 "data_size": 63488 00:17:50.269 }, 00:17:50.269 { 00:17:50.269 "name": null, 00:17:50.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.269 "is_configured": false, 00:17:50.269 "data_offset": 2048, 00:17:50.269 "data_size": 63488 00:17:50.269 }, 00:17:50.269 { 00:17:50.269 
"name": "BaseBdev3", 00:17:50.269 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:50.269 "is_configured": true, 00:17:50.269 "data_offset": 2048, 00:17:50.269 "data_size": 63488 00:17:50.269 }, 00:17:50.269 { 00:17:50.269 "name": "BaseBdev4", 00:17:50.269 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:50.269 "is_configured": true, 00:17:50.269 "data_offset": 2048, 00:17:50.269 "data_size": 63488 00:17:50.269 } 00:17:50.269 ] 00:17:50.269 }' 00:17:50.269 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.269 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.839 "name": "raid_bdev1", 00:17:50.839 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 
00:17:50.839 "strip_size_kb": 0, 00:17:50.839 "state": "online", 00:17:50.839 "raid_level": "raid1", 00:17:50.839 "superblock": true, 00:17:50.839 "num_base_bdevs": 4, 00:17:50.839 "num_base_bdevs_discovered": 3, 00:17:50.839 "num_base_bdevs_operational": 3, 00:17:50.839 "base_bdevs_list": [ 00:17:50.839 { 00:17:50.839 "name": "spare", 00:17:50.839 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:50.839 "is_configured": true, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 }, 00:17:50.839 { 00:17:50.839 "name": null, 00:17:50.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.839 "is_configured": false, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 }, 00:17:50.839 { 00:17:50.839 "name": "BaseBdev3", 00:17:50.839 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:50.839 "is_configured": true, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 }, 00:17:50.839 { 00:17:50.839 "name": "BaseBdev4", 00:17:50.839 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:50.839 "is_configured": true, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 } 00:17:50.839 ] 00:17:50.839 }' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.839 [2024-09-27 22:34:46.642140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.839 "name": "raid_bdev1", 00:17:50.839 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:50.839 "strip_size_kb": 0, 00:17:50.839 "state": "online", 00:17:50.839 "raid_level": "raid1", 00:17:50.839 "superblock": true, 00:17:50.839 "num_base_bdevs": 4, 00:17:50.839 "num_base_bdevs_discovered": 2, 00:17:50.839 "num_base_bdevs_operational": 2, 00:17:50.839 "base_bdevs_list": [ 00:17:50.839 { 00:17:50.839 "name": null, 00:17:50.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.839 "is_configured": false, 00:17:50.839 "data_offset": 0, 00:17:50.839 "data_size": 63488 00:17:50.839 }, 00:17:50.839 { 00:17:50.839 "name": null, 00:17:50.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.839 "is_configured": false, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 }, 00:17:50.839 { 00:17:50.839 "name": "BaseBdev3", 00:17:50.839 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:50.839 "is_configured": true, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 }, 00:17:50.839 { 00:17:50.839 "name": "BaseBdev4", 00:17:50.839 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:50.839 "is_configured": true, 00:17:50.839 "data_offset": 2048, 00:17:50.839 "data_size": 63488 00:17:50.839 } 
00:17:50.839 ] 00:17:50.839 }' 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.839 22:34:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.407 22:34:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.407 22:34:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.407 22:34:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.407 [2024-09-27 22:34:47.065624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.407 [2024-09-27 22:34:47.065849] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:51.407 [2024-09-27 22:34:47.065867] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:51.407 [2024-09-27 22:34:47.065927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.407 [2024-09-27 22:34:47.083764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:51.407 22:34:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.407 22:34:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:51.407 [2024-09-27 22:34:47.086187] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.345 "name": "raid_bdev1", 00:17:52.345 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:52.345 "strip_size_kb": 0, 00:17:52.345 "state": "online", 
00:17:52.345 "raid_level": "raid1", 00:17:52.345 "superblock": true, 00:17:52.345 "num_base_bdevs": 4, 00:17:52.345 "num_base_bdevs_discovered": 3, 00:17:52.345 "num_base_bdevs_operational": 3, 00:17:52.345 "process": { 00:17:52.345 "type": "rebuild", 00:17:52.345 "target": "spare", 00:17:52.345 "progress": { 00:17:52.345 "blocks": 20480, 00:17:52.345 "percent": 32 00:17:52.345 } 00:17:52.345 }, 00:17:52.345 "base_bdevs_list": [ 00:17:52.345 { 00:17:52.345 "name": "spare", 00:17:52.345 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:52.345 "is_configured": true, 00:17:52.345 "data_offset": 2048, 00:17:52.345 "data_size": 63488 00:17:52.345 }, 00:17:52.345 { 00:17:52.345 "name": null, 00:17:52.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.345 "is_configured": false, 00:17:52.345 "data_offset": 2048, 00:17:52.345 "data_size": 63488 00:17:52.345 }, 00:17:52.345 { 00:17:52.345 "name": "BaseBdev3", 00:17:52.345 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:52.345 "is_configured": true, 00:17:52.345 "data_offset": 2048, 00:17:52.345 "data_size": 63488 00:17:52.345 }, 00:17:52.345 { 00:17:52.345 "name": "BaseBdev4", 00:17:52.345 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:52.345 "is_configured": true, 00:17:52.345 "data_offset": 2048, 00:17:52.345 "data_size": 63488 00:17:52.345 } 00:17:52.345 ] 00:17:52.345 }' 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.345 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:52.346 22:34:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.346 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.605 [2024-09-27 22:34:48.226228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.605 [2024-09-27 22:34:48.292555] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.605 [2024-09-27 22:34:48.292963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.605 [2024-09-27 22:34:48.293019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.605 [2024-09-27 22:34:48.293032] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.605 22:34:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.605 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.605 "name": "raid_bdev1", 00:17:52.605 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:52.605 "strip_size_kb": 0, 00:17:52.605 "state": "online", 00:17:52.605 "raid_level": "raid1", 00:17:52.605 "superblock": true, 00:17:52.605 "num_base_bdevs": 4, 00:17:52.605 "num_base_bdevs_discovered": 2, 00:17:52.605 "num_base_bdevs_operational": 2, 00:17:52.605 "base_bdevs_list": [ 00:17:52.605 { 00:17:52.605 "name": null, 00:17:52.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.605 "is_configured": false, 00:17:52.605 "data_offset": 0, 00:17:52.605 "data_size": 63488 00:17:52.606 }, 00:17:52.606 { 00:17:52.606 "name": null, 00:17:52.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.606 "is_configured": false, 00:17:52.606 "data_offset": 2048, 00:17:52.606 "data_size": 63488 00:17:52.606 }, 00:17:52.606 { 00:17:52.606 "name": "BaseBdev3", 00:17:52.606 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:52.606 "is_configured": true, 00:17:52.606 "data_offset": 2048, 00:17:52.606 "data_size": 63488 00:17:52.606 }, 00:17:52.606 { 00:17:52.606 "name": "BaseBdev4", 00:17:52.606 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:52.606 "is_configured": true, 00:17:52.606 "data_offset": 2048, 00:17:52.606 
"data_size": 63488 00:17:52.606 } 00:17:52.606 ] 00:17:52.606 }' 00:17:52.606 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.606 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.174 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.174 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.174 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.174 [2024-09-27 22:34:48.801158] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.174 [2024-09-27 22:34:48.801374] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.174 [2024-09-27 22:34:48.801425] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:53.174 [2024-09-27 22:34:48.801438] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.174 [2024-09-27 22:34:48.802019] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.174 [2024-09-27 22:34:48.802044] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.174 [2024-09-27 22:34:48.802162] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:53.174 [2024-09-27 22:34:48.802178] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:53.174 [2024-09-27 22:34:48.802195] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:53.174 [2024-09-27 22:34:48.802225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.174 [2024-09-27 22:34:48.820105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:53.174 spare 00:17:53.174 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.174 22:34:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:53.174 [2024-09-27 22:34:48.822732] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.111 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.112 "name": "raid_bdev1", 00:17:54.112 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:54.112 "strip_size_kb": 0, 00:17:54.112 
"state": "online", 00:17:54.112 "raid_level": "raid1", 00:17:54.112 "superblock": true, 00:17:54.112 "num_base_bdevs": 4, 00:17:54.112 "num_base_bdevs_discovered": 3, 00:17:54.112 "num_base_bdevs_operational": 3, 00:17:54.112 "process": { 00:17:54.112 "type": "rebuild", 00:17:54.112 "target": "spare", 00:17:54.112 "progress": { 00:17:54.112 "blocks": 20480, 00:17:54.112 "percent": 32 00:17:54.112 } 00:17:54.112 }, 00:17:54.112 "base_bdevs_list": [ 00:17:54.112 { 00:17:54.112 "name": "spare", 00:17:54.112 "uuid": "577e8162-203e-5404-83d5-075a69868565", 00:17:54.112 "is_configured": true, 00:17:54.112 "data_offset": 2048, 00:17:54.112 "data_size": 63488 00:17:54.112 }, 00:17:54.112 { 00:17:54.112 "name": null, 00:17:54.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.112 "is_configured": false, 00:17:54.112 "data_offset": 2048, 00:17:54.112 "data_size": 63488 00:17:54.112 }, 00:17:54.112 { 00:17:54.112 "name": "BaseBdev3", 00:17:54.112 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:54.112 "is_configured": true, 00:17:54.112 "data_offset": 2048, 00:17:54.112 "data_size": 63488 00:17:54.112 }, 00:17:54.112 { 00:17:54.112 "name": "BaseBdev4", 00:17:54.112 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:54.112 "is_configured": true, 00:17:54.112 "data_offset": 2048, 00:17:54.112 "data_size": 63488 00:17:54.112 } 00:17:54.112 ] 00:17:54.112 }' 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.112 22:34:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.112 22:34:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.371 [2024-09-27 22:34:49.990085] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.371 [2024-09-27 22:34:50.028694] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.371 [2024-09-27 22:34:50.028801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.371 [2024-09-27 22:34:50.028821] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.371 [2024-09-27 22:34:50.028834] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.371 22:34:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.371 "name": "raid_bdev1", 00:17:54.371 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:54.371 "strip_size_kb": 0, 00:17:54.371 "state": "online", 00:17:54.371 "raid_level": "raid1", 00:17:54.371 "superblock": true, 00:17:54.371 "num_base_bdevs": 4, 00:17:54.371 "num_base_bdevs_discovered": 2, 00:17:54.371 "num_base_bdevs_operational": 2, 00:17:54.371 "base_bdevs_list": [ 00:17:54.371 { 00:17:54.371 "name": null, 00:17:54.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.371 "is_configured": false, 00:17:54.371 "data_offset": 0, 00:17:54.371 "data_size": 63488 00:17:54.371 }, 00:17:54.371 { 00:17:54.371 "name": null, 00:17:54.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.371 "is_configured": false, 00:17:54.371 "data_offset": 2048, 00:17:54.371 "data_size": 63488 00:17:54.371 }, 00:17:54.371 { 00:17:54.371 "name": "BaseBdev3", 00:17:54.371 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:54.371 "is_configured": true, 00:17:54.371 "data_offset": 2048, 00:17:54.371 "data_size": 63488 00:17:54.371 }, 00:17:54.371 { 00:17:54.371 "name": "BaseBdev4", 00:17:54.371 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:54.371 "is_configured": true, 00:17:54.371 "data_offset": 2048, 00:17:54.371 
"data_size": 63488 00:17:54.371 } 00:17:54.371 ] 00:17:54.371 }' 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.371 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.633 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.893 "name": "raid_bdev1", 00:17:54.893 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:54.893 "strip_size_kb": 0, 00:17:54.893 "state": "online", 00:17:54.893 "raid_level": "raid1", 00:17:54.893 "superblock": true, 00:17:54.893 "num_base_bdevs": 4, 00:17:54.893 "num_base_bdevs_discovered": 2, 00:17:54.893 "num_base_bdevs_operational": 2, 00:17:54.893 "base_bdevs_list": [ 00:17:54.893 { 00:17:54.893 "name": null, 00:17:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:54.893 "is_configured": false, 00:17:54.893 "data_offset": 0, 00:17:54.893 "data_size": 63488 00:17:54.893 }, 00:17:54.893 { 00:17:54.893 "name": null, 00:17:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.893 "is_configured": false, 00:17:54.893 "data_offset": 2048, 00:17:54.893 "data_size": 63488 00:17:54.893 }, 00:17:54.893 { 00:17:54.893 "name": "BaseBdev3", 00:17:54.893 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:54.893 "is_configured": true, 00:17:54.893 "data_offset": 2048, 00:17:54.893 "data_size": 63488 00:17:54.893 }, 00:17:54.893 { 00:17:54.893 "name": "BaseBdev4", 00:17:54.893 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:54.893 "is_configured": true, 00:17:54.893 "data_offset": 2048, 00:17:54.893 "data_size": 63488 00:17:54.893 } 00:17:54.893 ] 00:17:54.893 }' 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.893 22:34:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.893 [2024-09-27 22:34:50.676140] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:54.893 [2024-09-27 22:34:50.676229] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.893 [2024-09-27 22:34:50.676255] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:54.893 [2024-09-27 22:34:50.676270] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.893 [2024-09-27 22:34:50.676794] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.893 [2024-09-27 22:34:50.676826] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:54.893 [2024-09-27 22:34:50.676932] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:54.893 [2024-09-27 22:34:50.676954] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:54.893 [2024-09-27 22:34:50.676965] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:54.893 [2024-09-27 22:34:50.677133] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:54.893 BaseBdev1 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.893 22:34:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.829 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.088 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.088 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.088 "name": "raid_bdev1", 00:17:56.088 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:56.088 "strip_size_kb": 0, 00:17:56.088 "state": "online", 00:17:56.088 "raid_level": "raid1", 00:17:56.088 "superblock": true, 00:17:56.088 "num_base_bdevs": 4, 00:17:56.088 "num_base_bdevs_discovered": 2, 00:17:56.088 "num_base_bdevs_operational": 2, 00:17:56.088 "base_bdevs_list": [ 00:17:56.088 { 00:17:56.088 "name": null, 00:17:56.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.088 "is_configured": false, 00:17:56.088 
"data_offset": 0, 00:17:56.088 "data_size": 63488 00:17:56.088 }, 00:17:56.088 { 00:17:56.088 "name": null, 00:17:56.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.088 "is_configured": false, 00:17:56.088 "data_offset": 2048, 00:17:56.088 "data_size": 63488 00:17:56.088 }, 00:17:56.088 { 00:17:56.088 "name": "BaseBdev3", 00:17:56.088 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:56.088 "is_configured": true, 00:17:56.088 "data_offset": 2048, 00:17:56.088 "data_size": 63488 00:17:56.088 }, 00:17:56.088 { 00:17:56.088 "name": "BaseBdev4", 00:17:56.088 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:56.088 "is_configured": true, 00:17:56.088 "data_offset": 2048, 00:17:56.088 "data_size": 63488 00:17:56.088 } 00:17:56.088 ] 00:17:56.088 }' 00:17:56.088 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.088 22:34:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.357 "name": "raid_bdev1", 00:17:56.357 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:56.357 "strip_size_kb": 0, 00:17:56.357 "state": "online", 00:17:56.357 "raid_level": "raid1", 00:17:56.357 "superblock": true, 00:17:56.357 "num_base_bdevs": 4, 00:17:56.357 "num_base_bdevs_discovered": 2, 00:17:56.357 "num_base_bdevs_operational": 2, 00:17:56.357 "base_bdevs_list": [ 00:17:56.357 { 00:17:56.357 "name": null, 00:17:56.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.357 "is_configured": false, 00:17:56.357 "data_offset": 0, 00:17:56.357 "data_size": 63488 00:17:56.357 }, 00:17:56.357 { 00:17:56.357 "name": null, 00:17:56.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.357 "is_configured": false, 00:17:56.357 "data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 }, 00:17:56.357 { 00:17:56.357 "name": "BaseBdev3", 00:17:56.357 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:56.357 "is_configured": true, 00:17:56.357 "data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 }, 00:17:56.357 { 00:17:56.357 "name": "BaseBdev4", 00:17:56.357 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:56.357 "is_configured": true, 00:17:56.357 "data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 } 00:17:56.357 ] 00:17:56.357 }' 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.357 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.616 [2024-09-27 22:34:52.272110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.616 [2024-09-27 22:34:52.272340] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:56.616 [2024-09-27 22:34:52.272361] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.616 request: 00:17:56.616 { 00:17:56.616 "base_bdev": "BaseBdev1", 00:17:56.616 "raid_bdev": "raid_bdev1", 00:17:56.616 "method": "bdev_raid_add_base_bdev", 00:17:56.616 "req_id": 1 00:17:56.616 } 00:17:56.616 Got JSON-RPC error response 00:17:56.616 response: 00:17:56.616 { 00:17:56.616 "code": -22, 
00:17:56.616 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:56.616 } 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.616 22:34:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.553 22:34:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.553 "name": "raid_bdev1", 00:17:57.553 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:57.553 "strip_size_kb": 0, 00:17:57.553 "state": "online", 00:17:57.553 "raid_level": "raid1", 00:17:57.553 "superblock": true, 00:17:57.553 "num_base_bdevs": 4, 00:17:57.553 "num_base_bdevs_discovered": 2, 00:17:57.553 "num_base_bdevs_operational": 2, 00:17:57.553 "base_bdevs_list": [ 00:17:57.553 { 00:17:57.553 "name": null, 00:17:57.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.553 "is_configured": false, 00:17:57.553 "data_offset": 0, 00:17:57.553 "data_size": 63488 00:17:57.553 }, 00:17:57.553 { 00:17:57.553 "name": null, 00:17:57.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.553 "is_configured": false, 00:17:57.553 "data_offset": 2048, 00:17:57.553 "data_size": 63488 00:17:57.553 }, 00:17:57.553 { 00:17:57.553 "name": "BaseBdev3", 00:17:57.553 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:57.553 "is_configured": true, 00:17:57.553 "data_offset": 2048, 00:17:57.553 "data_size": 63488 00:17:57.553 }, 00:17:57.553 { 00:17:57.553 "name": "BaseBdev4", 00:17:57.553 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:57.553 "is_configured": true, 00:17:57.553 "data_offset": 2048, 00:17:57.553 "data_size": 63488 00:17:57.553 } 00:17:57.553 ] 00:17:57.553 }' 00:17:57.553 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.553 22:34:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.119 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.119 "name": "raid_bdev1", 00:17:58.119 "uuid": "6dae0d10-12ce-477b-8753-5169213c4fed", 00:17:58.119 "strip_size_kb": 0, 00:17:58.119 "state": "online", 00:17:58.119 "raid_level": "raid1", 00:17:58.119 "superblock": true, 00:17:58.119 "num_base_bdevs": 4, 00:17:58.119 "num_base_bdevs_discovered": 2, 00:17:58.119 "num_base_bdevs_operational": 2, 00:17:58.119 "base_bdevs_list": [ 00:17:58.119 { 00:17:58.119 "name": null, 00:17:58.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.119 "is_configured": false, 00:17:58.119 "data_offset": 0, 00:17:58.119 "data_size": 63488 00:17:58.119 }, 00:17:58.119 { 00:17:58.119 "name": null, 00:17:58.119 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:58.119 "is_configured": false, 00:17:58.119 "data_offset": 2048, 00:17:58.120 "data_size": 63488 00:17:58.120 }, 00:17:58.120 { 00:17:58.120 "name": "BaseBdev3", 00:17:58.120 "uuid": "720590c1-83b2-5967-90ba-976971f3e97c", 00:17:58.120 "is_configured": true, 00:17:58.120 "data_offset": 2048, 00:17:58.120 "data_size": 63488 00:17:58.120 }, 00:17:58.120 { 00:17:58.120 "name": "BaseBdev4", 00:17:58.120 "uuid": "65ee458c-21f4-5de8-b03f-3729f22a0321", 00:17:58.120 "is_configured": true, 00:17:58.120 "data_offset": 2048, 00:17:58.120 "data_size": 63488 00:17:58.120 } 00:17:58.120 ] 00:17:58.120 }' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 80185 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 80185 ']' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 80185 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80185 00:17:58.120 killing process with pid 80185 00:17:58.120 Received shutdown signal, test time was about 18.453455 seconds 00:17:58.120 00:17:58.120 Latency(us) 00:17:58.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:17:58.120 =================================================================================================================== 00:17:58.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80185' 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 80185 00:17:58.120 [2024-09-27 22:34:53.948376] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.120 22:34:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 80185 00:17:58.120 [2024-09-27 22:34:53.948517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.120 [2024-09-27 22:34:53.948597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.120 [2024-09-27 22:34:53.948610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:58.686 [2024-09-27 22:34:54.413396] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.220 ************************************ 00:18:01.220 END TEST raid_rebuild_test_sb_io 00:18:01.220 ************************************ 00:18:01.220 22:34:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:01.220 00:18:01.220 real 0m23.355s 00:18:01.220 user 0m29.784s 00:18:01.220 sys 0m3.223s 00:18:01.220 22:34:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.220 22:34:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.220 22:34:56 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:01.220 22:34:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:01.220 22:34:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:01.220 22:34:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.220 22:34:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.220 ************************************ 00:18:01.220 START TEST raid5f_state_function_test 00:18:01.220 ************************************ 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:01.220 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80924 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80924' 
00:18:01.221 Process raid pid: 80924 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80924 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80924 ']' 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.221 22:34:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.221 [2024-09-27 22:34:56.841516] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:18:01.221 [2024-09-27 22:34:56.841659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.221 [2024-09-27 22:34:57.019328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.479 [2024-09-27 22:34:57.287283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.739 [2024-09-27 22:34:57.554844] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.739 [2024-09-27 22:34:57.554893] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.307 [2024-09-27 22:34:58.079430] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.307 [2024-09-27 22:34:58.079625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.307 [2024-09-27 22:34:58.079639] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.307 [2024-09-27 22:34:58.079657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.307 [2024-09-27 22:34:58.079665] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:02.307 [2024-09-27 22:34:58.079678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.307 "name": "Existed_Raid", 00:18:02.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.307 "strip_size_kb": 64, 00:18:02.307 "state": "configuring", 00:18:02.307 "raid_level": "raid5f", 00:18:02.307 "superblock": false, 00:18:02.307 "num_base_bdevs": 3, 00:18:02.307 "num_base_bdevs_discovered": 0, 00:18:02.307 "num_base_bdevs_operational": 3, 00:18:02.307 "base_bdevs_list": [ 00:18:02.307 { 00:18:02.307 "name": "BaseBdev1", 00:18:02.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.307 "is_configured": false, 00:18:02.307 "data_offset": 0, 00:18:02.307 "data_size": 0 00:18:02.307 }, 00:18:02.307 { 00:18:02.307 "name": "BaseBdev2", 00:18:02.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.307 "is_configured": false, 00:18:02.307 "data_offset": 0, 00:18:02.307 "data_size": 0 00:18:02.307 }, 00:18:02.307 { 00:18:02.307 "name": "BaseBdev3", 00:18:02.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.307 "is_configured": false, 00:18:02.307 "data_offset": 0, 00:18:02.307 "data_size": 0 00:18:02.307 } 00:18:02.307 ] 00:18:02.307 }' 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.307 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.881 [2024-09-27 22:34:58.546833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.881 [2024-09-27 22:34:58.546882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.881 [2024-09-27 22:34:58.558826] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.881 [2024-09-27 22:34:58.558890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.881 [2024-09-27 22:34:58.558902] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.881 [2024-09-27 22:34:58.558916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.881 [2024-09-27 22:34:58.558925] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:02.881 [2024-09-27 22:34:58.558938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.881 [2024-09-27 22:34:58.617903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.881 BaseBdev1 00:18:02.881 22:34:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:02.881 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.882 [ 00:18:02.882 { 00:18:02.882 "name": "BaseBdev1", 00:18:02.882 "aliases": [ 00:18:02.882 "4cad1857-d3e3-43e5-8791-a8ba97ff27a0" 00:18:02.882 ], 00:18:02.882 "product_name": "Malloc disk", 00:18:02.882 "block_size": 512, 00:18:02.882 "num_blocks": 65536, 00:18:02.882 "uuid": "4cad1857-d3e3-43e5-8791-a8ba97ff27a0", 00:18:02.882 "assigned_rate_limits": { 00:18:02.882 "rw_ios_per_sec": 0, 00:18:02.882 
"rw_mbytes_per_sec": 0, 00:18:02.882 "r_mbytes_per_sec": 0, 00:18:02.882 "w_mbytes_per_sec": 0 00:18:02.882 }, 00:18:02.882 "claimed": true, 00:18:02.882 "claim_type": "exclusive_write", 00:18:02.882 "zoned": false, 00:18:02.882 "supported_io_types": { 00:18:02.882 "read": true, 00:18:02.882 "write": true, 00:18:02.882 "unmap": true, 00:18:02.882 "flush": true, 00:18:02.882 "reset": true, 00:18:02.882 "nvme_admin": false, 00:18:02.882 "nvme_io": false, 00:18:02.882 "nvme_io_md": false, 00:18:02.882 "write_zeroes": true, 00:18:02.882 "zcopy": true, 00:18:02.882 "get_zone_info": false, 00:18:02.882 "zone_management": false, 00:18:02.882 "zone_append": false, 00:18:02.882 "compare": false, 00:18:02.882 "compare_and_write": false, 00:18:02.882 "abort": true, 00:18:02.882 "seek_hole": false, 00:18:02.882 "seek_data": false, 00:18:02.882 "copy": true, 00:18:02.882 "nvme_iov_md": false 00:18:02.882 }, 00:18:02.882 "memory_domains": [ 00:18:02.882 { 00:18:02.882 "dma_device_id": "system", 00:18:02.882 "dma_device_type": 1 00:18:02.882 }, 00:18:02.882 { 00:18:02.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.882 "dma_device_type": 2 00:18:02.882 } 00:18:02.882 ], 00:18:02.882 "driver_specific": {} 00:18:02.882 } 00:18:02.882 ] 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.882 22:34:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.882 "name": "Existed_Raid", 00:18:02.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.882 "strip_size_kb": 64, 00:18:02.882 "state": "configuring", 00:18:02.882 "raid_level": "raid5f", 00:18:02.882 "superblock": false, 00:18:02.882 "num_base_bdevs": 3, 00:18:02.882 "num_base_bdevs_discovered": 1, 00:18:02.882 "num_base_bdevs_operational": 3, 00:18:02.882 "base_bdevs_list": [ 00:18:02.882 { 00:18:02.882 "name": "BaseBdev1", 00:18:02.882 "uuid": "4cad1857-d3e3-43e5-8791-a8ba97ff27a0", 00:18:02.882 "is_configured": true, 00:18:02.882 "data_offset": 0, 00:18:02.882 "data_size": 65536 00:18:02.882 }, 00:18:02.882 { 00:18:02.882 "name": 
"BaseBdev2", 00:18:02.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.882 "is_configured": false, 00:18:02.882 "data_offset": 0, 00:18:02.882 "data_size": 0 00:18:02.882 }, 00:18:02.882 { 00:18:02.882 "name": "BaseBdev3", 00:18:02.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.882 "is_configured": false, 00:18:02.882 "data_offset": 0, 00:18:02.882 "data_size": 0 00:18:02.882 } 00:18:02.882 ] 00:18:02.882 }' 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.882 22:34:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.449 [2024-09-27 22:34:59.085337] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.449 [2024-09-27 22:34:59.085398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.449 [2024-09-27 22:34:59.093421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.449 [2024-09-27 22:34:59.096040] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:03.449 [2024-09-27 22:34:59.096097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:03.449 [2024-09-27 22:34:59.096110] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:03.449 [2024-09-27 22:34:59.096124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.449 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.450 "name": "Existed_Raid", 00:18:03.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.450 "strip_size_kb": 64, 00:18:03.450 "state": "configuring", 00:18:03.450 "raid_level": "raid5f", 00:18:03.450 "superblock": false, 00:18:03.450 "num_base_bdevs": 3, 00:18:03.450 "num_base_bdevs_discovered": 1, 00:18:03.450 "num_base_bdevs_operational": 3, 00:18:03.450 "base_bdevs_list": [ 00:18:03.450 { 00:18:03.450 "name": "BaseBdev1", 00:18:03.450 "uuid": "4cad1857-d3e3-43e5-8791-a8ba97ff27a0", 00:18:03.450 "is_configured": true, 00:18:03.450 "data_offset": 0, 00:18:03.450 "data_size": 65536 00:18:03.450 }, 00:18:03.450 { 00:18:03.450 "name": "BaseBdev2", 00:18:03.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.450 "is_configured": false, 00:18:03.450 "data_offset": 0, 00:18:03.450 "data_size": 0 00:18:03.450 }, 00:18:03.450 { 00:18:03.450 "name": "BaseBdev3", 00:18:03.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.450 "is_configured": false, 00:18:03.450 "data_offset": 0, 00:18:03.450 "data_size": 0 00:18:03.450 } 00:18:03.450 ] 00:18:03.450 }' 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.450 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.710 [2024-09-27 22:34:59.557169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.710 BaseBdev2 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.710 22:34:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.710 [ 00:18:03.710 { 00:18:03.710 "name": "BaseBdev2", 00:18:03.710 "aliases": [ 00:18:03.710 "d5fc2b5f-ed3b-451e-a762-3a4a52ea3baa" 00:18:03.710 ], 00:18:03.710 "product_name": "Malloc disk", 00:18:03.710 "block_size": 512, 00:18:03.710 "num_blocks": 65536, 00:18:03.710 "uuid": "d5fc2b5f-ed3b-451e-a762-3a4a52ea3baa", 00:18:03.710 "assigned_rate_limits": { 00:18:03.968 "rw_ios_per_sec": 0, 00:18:03.968 "rw_mbytes_per_sec": 0, 00:18:03.968 "r_mbytes_per_sec": 0, 00:18:03.968 "w_mbytes_per_sec": 0 00:18:03.968 }, 00:18:03.968 "claimed": true, 00:18:03.968 "claim_type": "exclusive_write", 00:18:03.969 "zoned": false, 00:18:03.969 "supported_io_types": { 00:18:03.969 "read": true, 00:18:03.969 "write": true, 00:18:03.969 "unmap": true, 00:18:03.969 "flush": true, 00:18:03.969 "reset": true, 00:18:03.969 "nvme_admin": false, 00:18:03.969 "nvme_io": false, 00:18:03.969 "nvme_io_md": false, 00:18:03.969 "write_zeroes": true, 00:18:03.969 "zcopy": true, 00:18:03.969 "get_zone_info": false, 00:18:03.969 "zone_management": false, 00:18:03.969 "zone_append": false, 00:18:03.969 "compare": false, 00:18:03.969 "compare_and_write": false, 00:18:03.969 "abort": true, 00:18:03.969 "seek_hole": false, 00:18:03.969 "seek_data": false, 00:18:03.969 "copy": true, 00:18:03.969 "nvme_iov_md": false 00:18:03.969 }, 00:18:03.969 "memory_domains": [ 00:18:03.969 { 00:18:03.969 "dma_device_id": "system", 00:18:03.969 "dma_device_type": 1 00:18:03.969 }, 00:18:03.969 { 00:18:03.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.969 "dma_device_type": 2 00:18:03.969 } 00:18:03.969 ], 00:18:03.969 "driver_specific": {} 00:18:03.969 } 00:18:03.969 ] 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:03.969 "name": "Existed_Raid", 00:18:03.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.969 "strip_size_kb": 64, 00:18:03.969 "state": "configuring", 00:18:03.969 "raid_level": "raid5f", 00:18:03.969 "superblock": false, 00:18:03.969 "num_base_bdevs": 3, 00:18:03.969 "num_base_bdevs_discovered": 2, 00:18:03.969 "num_base_bdevs_operational": 3, 00:18:03.969 "base_bdevs_list": [ 00:18:03.969 { 00:18:03.969 "name": "BaseBdev1", 00:18:03.969 "uuid": "4cad1857-d3e3-43e5-8791-a8ba97ff27a0", 00:18:03.969 "is_configured": true, 00:18:03.969 "data_offset": 0, 00:18:03.969 "data_size": 65536 00:18:03.969 }, 00:18:03.969 { 00:18:03.969 "name": "BaseBdev2", 00:18:03.969 "uuid": "d5fc2b5f-ed3b-451e-a762-3a4a52ea3baa", 00:18:03.969 "is_configured": true, 00:18:03.969 "data_offset": 0, 00:18:03.969 "data_size": 65536 00:18:03.969 }, 00:18:03.969 { 00:18:03.969 "name": "BaseBdev3", 00:18:03.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.969 "is_configured": false, 00:18:03.969 "data_offset": 0, 00:18:03.969 "data_size": 0 00:18:03.969 } 00:18:03.969 ] 00:18:03.969 }' 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.969 22:34:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.228 [2024-09-27 22:35:00.082101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.228 [2024-09-27 22:35:00.082197] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:04.228 [2024-09-27 22:35:00.082219] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:04.228 [2024-09-27 22:35:00.082495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:04.228 [2024-09-27 22:35:00.089951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:04.228 [2024-09-27 22:35:00.090006] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:04.228 BaseBdev3 00:18:04.228 [2024-09-27 22:35:00.090345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.228 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 [ 00:18:04.487 { 00:18:04.487 "name": "BaseBdev3", 00:18:04.487 "aliases": [ 00:18:04.487 "a499cb17-634f-4fad-b99e-a47c9e5d47dc" 00:18:04.487 ], 00:18:04.487 "product_name": "Malloc disk", 00:18:04.487 "block_size": 512, 00:18:04.487 "num_blocks": 65536, 00:18:04.487 "uuid": "a499cb17-634f-4fad-b99e-a47c9e5d47dc", 00:18:04.487 "assigned_rate_limits": { 00:18:04.487 "rw_ios_per_sec": 0, 00:18:04.487 "rw_mbytes_per_sec": 0, 00:18:04.487 "r_mbytes_per_sec": 0, 00:18:04.487 "w_mbytes_per_sec": 0 00:18:04.487 }, 00:18:04.487 "claimed": true, 00:18:04.487 "claim_type": "exclusive_write", 00:18:04.487 "zoned": false, 00:18:04.487 "supported_io_types": { 00:18:04.487 "read": true, 00:18:04.487 "write": true, 00:18:04.487 "unmap": true, 00:18:04.487 "flush": true, 00:18:04.487 "reset": true, 00:18:04.487 "nvme_admin": false, 00:18:04.487 "nvme_io": false, 00:18:04.487 "nvme_io_md": false, 00:18:04.487 "write_zeroes": true, 00:18:04.487 "zcopy": true, 00:18:04.487 "get_zone_info": false, 00:18:04.487 "zone_management": false, 00:18:04.487 "zone_append": false, 00:18:04.487 "compare": false, 00:18:04.487 "compare_and_write": false, 00:18:04.487 "abort": true, 00:18:04.487 "seek_hole": false, 00:18:04.487 "seek_data": false, 00:18:04.487 "copy": true, 00:18:04.487 "nvme_iov_md": false 00:18:04.487 }, 00:18:04.487 "memory_domains": [ 00:18:04.487 { 00:18:04.487 "dma_device_id": "system", 00:18:04.487 "dma_device_type": 1 00:18:04.487 }, 00:18:04.487 { 00:18:04.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.487 "dma_device_type": 2 00:18:04.487 } 00:18:04.487 ], 00:18:04.487 "driver_specific": {} 00:18:04.487 } 00:18:04.487 ] 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.487 22:35:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.487 "name": "Existed_Raid", 00:18:04.487 "uuid": "d36b81d8-f4f2-47cf-90e9-a942cca8bded", 00:18:04.487 "strip_size_kb": 64, 00:18:04.487 "state": "online", 00:18:04.487 "raid_level": "raid5f", 00:18:04.487 "superblock": false, 00:18:04.487 "num_base_bdevs": 3, 00:18:04.487 "num_base_bdevs_discovered": 3, 00:18:04.487 "num_base_bdevs_operational": 3, 00:18:04.487 "base_bdevs_list": [ 00:18:04.487 { 00:18:04.487 "name": "BaseBdev1", 00:18:04.487 "uuid": "4cad1857-d3e3-43e5-8791-a8ba97ff27a0", 00:18:04.487 "is_configured": true, 00:18:04.487 "data_offset": 0, 00:18:04.487 "data_size": 65536 00:18:04.487 }, 00:18:04.487 { 00:18:04.487 "name": "BaseBdev2", 00:18:04.487 "uuid": "d5fc2b5f-ed3b-451e-a762-3a4a52ea3baa", 00:18:04.487 "is_configured": true, 00:18:04.487 "data_offset": 0, 00:18:04.487 "data_size": 65536 00:18:04.487 }, 00:18:04.487 { 00:18:04.487 "name": "BaseBdev3", 00:18:04.487 "uuid": "a499cb17-634f-4fad-b99e-a47c9e5d47dc", 00:18:04.487 "is_configured": true, 00:18:04.487 "data_offset": 0, 00:18:04.487 "data_size": 65536 00:18:04.487 } 00:18:04.487 ] 00:18:04.487 }' 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.487 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:04.747 22:35:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.747 [2024-09-27 22:35:00.576305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.747 "name": "Existed_Raid", 00:18:04.747 "aliases": [ 00:18:04.747 "d36b81d8-f4f2-47cf-90e9-a942cca8bded" 00:18:04.747 ], 00:18:04.747 "product_name": "Raid Volume", 00:18:04.747 "block_size": 512, 00:18:04.747 "num_blocks": 131072, 00:18:04.747 "uuid": "d36b81d8-f4f2-47cf-90e9-a942cca8bded", 00:18:04.747 "assigned_rate_limits": { 00:18:04.747 "rw_ios_per_sec": 0, 00:18:04.747 "rw_mbytes_per_sec": 0, 00:18:04.747 "r_mbytes_per_sec": 0, 00:18:04.747 "w_mbytes_per_sec": 0 00:18:04.747 }, 00:18:04.747 "claimed": false, 00:18:04.747 "zoned": false, 00:18:04.747 "supported_io_types": { 00:18:04.747 "read": true, 00:18:04.747 "write": true, 00:18:04.747 "unmap": false, 00:18:04.747 "flush": false, 00:18:04.747 "reset": true, 00:18:04.747 "nvme_admin": false, 00:18:04.747 "nvme_io": false, 00:18:04.747 "nvme_io_md": false, 00:18:04.747 "write_zeroes": true, 00:18:04.747 "zcopy": false, 00:18:04.747 "get_zone_info": false, 00:18:04.747 "zone_management": false, 00:18:04.747 "zone_append": false, 
00:18:04.747 "compare": false, 00:18:04.747 "compare_and_write": false, 00:18:04.747 "abort": false, 00:18:04.747 "seek_hole": false, 00:18:04.747 "seek_data": false, 00:18:04.747 "copy": false, 00:18:04.747 "nvme_iov_md": false 00:18:04.747 }, 00:18:04.747 "driver_specific": { 00:18:04.747 "raid": { 00:18:04.747 "uuid": "d36b81d8-f4f2-47cf-90e9-a942cca8bded", 00:18:04.747 "strip_size_kb": 64, 00:18:04.747 "state": "online", 00:18:04.747 "raid_level": "raid5f", 00:18:04.747 "superblock": false, 00:18:04.747 "num_base_bdevs": 3, 00:18:04.747 "num_base_bdevs_discovered": 3, 00:18:04.747 "num_base_bdevs_operational": 3, 00:18:04.747 "base_bdevs_list": [ 00:18:04.747 { 00:18:04.747 "name": "BaseBdev1", 00:18:04.747 "uuid": "4cad1857-d3e3-43e5-8791-a8ba97ff27a0", 00:18:04.747 "is_configured": true, 00:18:04.747 "data_offset": 0, 00:18:04.747 "data_size": 65536 00:18:04.747 }, 00:18:04.747 { 00:18:04.747 "name": "BaseBdev2", 00:18:04.747 "uuid": "d5fc2b5f-ed3b-451e-a762-3a4a52ea3baa", 00:18:04.747 "is_configured": true, 00:18:04.747 "data_offset": 0, 00:18:04.747 "data_size": 65536 00:18:04.747 }, 00:18:04.747 { 00:18:04.747 "name": "BaseBdev3", 00:18:04.747 "uuid": "a499cb17-634f-4fad-b99e-a47c9e5d47dc", 00:18:04.747 "is_configured": true, 00:18:04.747 "data_offset": 0, 00:18:04.747 "data_size": 65536 00:18:04.747 } 00:18:04.747 ] 00:18:04.747 } 00:18:04.747 } 00:18:04.747 }' 00:18:04.747 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:05.007 BaseBdev2 00:18:05.007 BaseBdev3' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.007 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.007 [2024-09-27 22:35:00.872122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:05.274 
22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.274 22:35:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.274 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.274 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.274 "name": "Existed_Raid", 00:18:05.274 "uuid": "d36b81d8-f4f2-47cf-90e9-a942cca8bded", 00:18:05.274 "strip_size_kb": 64, 00:18:05.274 "state": 
"online", 00:18:05.274 "raid_level": "raid5f", 00:18:05.274 "superblock": false, 00:18:05.274 "num_base_bdevs": 3, 00:18:05.274 "num_base_bdevs_discovered": 2, 00:18:05.274 "num_base_bdevs_operational": 2, 00:18:05.274 "base_bdevs_list": [ 00:18:05.274 { 00:18:05.274 "name": null, 00:18:05.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.274 "is_configured": false, 00:18:05.274 "data_offset": 0, 00:18:05.274 "data_size": 65536 00:18:05.274 }, 00:18:05.274 { 00:18:05.274 "name": "BaseBdev2", 00:18:05.274 "uuid": "d5fc2b5f-ed3b-451e-a762-3a4a52ea3baa", 00:18:05.274 "is_configured": true, 00:18:05.274 "data_offset": 0, 00:18:05.274 "data_size": 65536 00:18:05.274 }, 00:18:05.274 { 00:18:05.274 "name": "BaseBdev3", 00:18:05.274 "uuid": "a499cb17-634f-4fad-b99e-a47c9e5d47dc", 00:18:05.274 "is_configured": true, 00:18:05.274 "data_offset": 0, 00:18:05.274 "data_size": 65536 00:18:05.274 } 00:18:05.274 ] 00:18:05.274 }' 00:18:05.274 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.274 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.855 [2024-09-27 22:35:01.492172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:05.855 [2024-09-27 22:35:01.492278] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.855 [2024-09-27 22:35:01.598916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.855 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.855 [2024-09-27 22:35:01.654862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:05.855 [2024-09-27 22:35:01.654928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.116 BaseBdev2 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:06.116 [ 00:18:06.116 { 00:18:06.116 "name": "BaseBdev2", 00:18:06.116 "aliases": [ 00:18:06.116 "cff153a2-0da4-4be9-bd4b-88771bdfd073" 00:18:06.116 ], 00:18:06.116 "product_name": "Malloc disk", 00:18:06.116 "block_size": 512, 00:18:06.116 "num_blocks": 65536, 00:18:06.116 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:06.116 "assigned_rate_limits": { 00:18:06.116 "rw_ios_per_sec": 0, 00:18:06.116 "rw_mbytes_per_sec": 0, 00:18:06.116 "r_mbytes_per_sec": 0, 00:18:06.116 "w_mbytes_per_sec": 0 00:18:06.116 }, 00:18:06.116 "claimed": false, 00:18:06.116 "zoned": false, 00:18:06.116 "supported_io_types": { 00:18:06.116 "read": true, 00:18:06.116 "write": true, 00:18:06.116 "unmap": true, 00:18:06.116 "flush": true, 00:18:06.116 "reset": true, 00:18:06.116 "nvme_admin": false, 00:18:06.116 "nvme_io": false, 00:18:06.116 "nvme_io_md": false, 00:18:06.116 "write_zeroes": true, 00:18:06.116 "zcopy": true, 00:18:06.116 "get_zone_info": false, 00:18:06.116 "zone_management": false, 00:18:06.116 "zone_append": false, 00:18:06.116 "compare": false, 00:18:06.116 "compare_and_write": false, 00:18:06.116 "abort": true, 00:18:06.116 "seek_hole": false, 00:18:06.116 "seek_data": false, 00:18:06.116 "copy": true, 00:18:06.116 "nvme_iov_md": false 00:18:06.116 }, 00:18:06.116 "memory_domains": [ 00:18:06.116 { 00:18:06.116 "dma_device_id": "system", 00:18:06.116 "dma_device_type": 1 00:18:06.116 }, 00:18:06.116 { 00:18:06.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.116 "dma_device_type": 2 00:18:06.116 } 00:18:06.116 ], 00:18:06.116 "driver_specific": {} 00:18:06.116 } 00:18:06.116 ] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.116 BaseBdev3 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.116 22:35:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.116 [ 00:18:06.116 { 00:18:06.116 "name": "BaseBdev3", 00:18:06.116 "aliases": [ 00:18:06.116 "14bf54f6-d2d1-4263-8fa3-b24029a3f374" 00:18:06.116 ], 00:18:06.116 "product_name": "Malloc disk", 00:18:06.116 "block_size": 512, 00:18:06.116 "num_blocks": 65536, 00:18:06.116 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:06.116 "assigned_rate_limits": { 00:18:06.116 "rw_ios_per_sec": 0, 00:18:06.375 "rw_mbytes_per_sec": 0, 00:18:06.375 "r_mbytes_per_sec": 0, 00:18:06.375 "w_mbytes_per_sec": 0 00:18:06.375 }, 00:18:06.375 "claimed": false, 00:18:06.375 "zoned": false, 00:18:06.375 "supported_io_types": { 00:18:06.375 "read": true, 00:18:06.375 "write": true, 00:18:06.375 "unmap": true, 00:18:06.375 "flush": true, 00:18:06.375 "reset": true, 00:18:06.375 "nvme_admin": false, 00:18:06.375 "nvme_io": false, 00:18:06.375 "nvme_io_md": false, 00:18:06.375 "write_zeroes": true, 00:18:06.375 "zcopy": true, 00:18:06.375 "get_zone_info": false, 00:18:06.375 "zone_management": false, 00:18:06.375 "zone_append": false, 00:18:06.375 "compare": false, 00:18:06.375 "compare_and_write": false, 00:18:06.375 "abort": true, 00:18:06.375 "seek_hole": false, 00:18:06.375 "seek_data": false, 00:18:06.375 "copy": true, 00:18:06.375 "nvme_iov_md": false 00:18:06.375 }, 00:18:06.375 "memory_domains": [ 00:18:06.375 { 00:18:06.375 "dma_device_id": "system", 00:18:06.375 "dma_device_type": 1 00:18:06.375 }, 00:18:06.375 { 00:18:06.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.375 "dma_device_type": 2 00:18:06.375 } 00:18:06.375 ], 00:18:06.375 "driver_specific": {} 00:18:06.375 } 00:18:06.375 ] 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:06.375 22:35:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.375 [2024-09-27 22:35:02.018109] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.375 [2024-09-27 22:35:02.018176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.375 [2024-09-27 22:35:02.018208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.375 [2024-09-27 22:35:02.020966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.375 22:35:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.375 "name": "Existed_Raid", 00:18:06.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.375 "strip_size_kb": 64, 00:18:06.375 "state": "configuring", 00:18:06.375 "raid_level": "raid5f", 00:18:06.375 "superblock": false, 00:18:06.375 "num_base_bdevs": 3, 00:18:06.375 "num_base_bdevs_discovered": 2, 00:18:06.375 "num_base_bdevs_operational": 3, 00:18:06.375 "base_bdevs_list": [ 00:18:06.375 { 00:18:06.375 "name": "BaseBdev1", 00:18:06.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.375 "is_configured": false, 00:18:06.375 "data_offset": 0, 00:18:06.375 "data_size": 0 00:18:06.375 }, 00:18:06.375 { 00:18:06.375 "name": "BaseBdev2", 00:18:06.375 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:06.375 "is_configured": true, 00:18:06.375 "data_offset": 0, 00:18:06.375 "data_size": 65536 00:18:06.375 }, 00:18:06.375 { 00:18:06.375 "name": "BaseBdev3", 00:18:06.375 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:06.375 "is_configured": true, 
00:18:06.375 "data_offset": 0, 00:18:06.375 "data_size": 65536 00:18:06.375 } 00:18:06.375 ] 00:18:06.375 }' 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.375 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.634 [2024-09-27 22:35:02.425452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.634 22:35:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.634 "name": "Existed_Raid", 00:18:06.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.634 "strip_size_kb": 64, 00:18:06.634 "state": "configuring", 00:18:06.634 "raid_level": "raid5f", 00:18:06.634 "superblock": false, 00:18:06.634 "num_base_bdevs": 3, 00:18:06.634 "num_base_bdevs_discovered": 1, 00:18:06.634 "num_base_bdevs_operational": 3, 00:18:06.634 "base_bdevs_list": [ 00:18:06.634 { 00:18:06.634 "name": "BaseBdev1", 00:18:06.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.634 "is_configured": false, 00:18:06.634 "data_offset": 0, 00:18:06.634 "data_size": 0 00:18:06.634 }, 00:18:06.634 { 00:18:06.634 "name": null, 00:18:06.634 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:06.634 "is_configured": false, 00:18:06.634 "data_offset": 0, 00:18:06.634 "data_size": 65536 00:18:06.634 }, 00:18:06.634 { 00:18:06.634 "name": "BaseBdev3", 00:18:06.634 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:06.634 "is_configured": true, 00:18:06.634 "data_offset": 0, 00:18:06.634 "data_size": 65536 00:18:06.634 } 00:18:06.634 ] 00:18:06.634 }' 00:18:06.634 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.634 22:35:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.203 [2024-09-27 22:35:02.903009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.203 BaseBdev1 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:07.203 22:35:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.203 [ 00:18:07.203 { 00:18:07.203 "name": "BaseBdev1", 00:18:07.203 "aliases": [ 00:18:07.203 "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed" 00:18:07.203 ], 00:18:07.203 "product_name": "Malloc disk", 00:18:07.203 "block_size": 512, 00:18:07.203 "num_blocks": 65536, 00:18:07.203 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:07.203 "assigned_rate_limits": { 00:18:07.203 "rw_ios_per_sec": 0, 00:18:07.203 "rw_mbytes_per_sec": 0, 00:18:07.203 "r_mbytes_per_sec": 0, 00:18:07.203 "w_mbytes_per_sec": 0 00:18:07.203 }, 00:18:07.203 "claimed": true, 00:18:07.203 "claim_type": "exclusive_write", 00:18:07.203 "zoned": false, 00:18:07.203 "supported_io_types": { 00:18:07.203 "read": true, 00:18:07.203 "write": true, 00:18:07.203 "unmap": true, 00:18:07.203 "flush": true, 00:18:07.203 "reset": true, 00:18:07.203 "nvme_admin": false, 00:18:07.203 "nvme_io": false, 00:18:07.203 "nvme_io_md": false, 00:18:07.203 "write_zeroes": true, 00:18:07.203 "zcopy": true, 00:18:07.203 "get_zone_info": false, 00:18:07.203 "zone_management": false, 00:18:07.203 "zone_append": false, 00:18:07.203 
"compare": false, 00:18:07.203 "compare_and_write": false, 00:18:07.203 "abort": true, 00:18:07.203 "seek_hole": false, 00:18:07.203 "seek_data": false, 00:18:07.203 "copy": true, 00:18:07.203 "nvme_iov_md": false 00:18:07.203 }, 00:18:07.203 "memory_domains": [ 00:18:07.203 { 00:18:07.203 "dma_device_id": "system", 00:18:07.203 "dma_device_type": 1 00:18:07.203 }, 00:18:07.203 { 00:18:07.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.203 "dma_device_type": 2 00:18:07.203 } 00:18:07.203 ], 00:18:07.203 "driver_specific": {} 00:18:07.203 } 00:18:07.203 ] 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.203 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.204 22:35:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.204 22:35:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.204 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.204 "name": "Existed_Raid", 00:18:07.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.204 "strip_size_kb": 64, 00:18:07.204 "state": "configuring", 00:18:07.204 "raid_level": "raid5f", 00:18:07.204 "superblock": false, 00:18:07.204 "num_base_bdevs": 3, 00:18:07.204 "num_base_bdevs_discovered": 2, 00:18:07.204 "num_base_bdevs_operational": 3, 00:18:07.204 "base_bdevs_list": [ 00:18:07.204 { 00:18:07.204 "name": "BaseBdev1", 00:18:07.204 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:07.204 "is_configured": true, 00:18:07.204 "data_offset": 0, 00:18:07.204 "data_size": 65536 00:18:07.204 }, 00:18:07.204 { 00:18:07.204 "name": null, 00:18:07.204 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:07.204 "is_configured": false, 00:18:07.204 "data_offset": 0, 00:18:07.204 "data_size": 65536 00:18:07.204 }, 00:18:07.204 { 00:18:07.204 "name": "BaseBdev3", 00:18:07.204 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:07.204 "is_configured": true, 00:18:07.204 "data_offset": 0, 00:18:07.204 "data_size": 65536 00:18:07.204 } 00:18:07.204 ] 00:18:07.204 }' 00:18:07.204 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.204 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.771 22:35:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.771 [2024-09-27 22:35:03.430428] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.771 22:35:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.771 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.771 "name": "Existed_Raid", 00:18:07.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.771 "strip_size_kb": 64, 00:18:07.771 "state": "configuring", 00:18:07.771 "raid_level": "raid5f", 00:18:07.771 "superblock": false, 00:18:07.771 "num_base_bdevs": 3, 00:18:07.771 "num_base_bdevs_discovered": 1, 00:18:07.771 "num_base_bdevs_operational": 3, 00:18:07.771 "base_bdevs_list": [ 00:18:07.771 { 00:18:07.771 "name": "BaseBdev1", 00:18:07.771 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:07.771 "is_configured": true, 00:18:07.771 "data_offset": 0, 00:18:07.771 "data_size": 65536 00:18:07.771 }, 00:18:07.771 { 00:18:07.771 "name": null, 00:18:07.771 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:07.771 "is_configured": false, 00:18:07.771 "data_offset": 0, 00:18:07.771 "data_size": 65536 00:18:07.771 }, 00:18:07.771 { 00:18:07.771 "name": null, 
00:18:07.771 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:07.771 "is_configured": false, 00:18:07.771 "data_offset": 0, 00:18:07.771 "data_size": 65536 00:18:07.771 } 00:18:07.771 ] 00:18:07.771 }' 00:18:07.772 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.772 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.030 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:08.030 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.030 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.030 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.030 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.030 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:08.031 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:08.031 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.031 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.290 [2024-09-27 22:35:03.909792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.290 22:35:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.290 "name": "Existed_Raid", 00:18:08.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.290 "strip_size_kb": 64, 00:18:08.290 "state": "configuring", 00:18:08.290 "raid_level": "raid5f", 00:18:08.290 "superblock": false, 00:18:08.290 "num_base_bdevs": 3, 00:18:08.290 "num_base_bdevs_discovered": 2, 00:18:08.290 "num_base_bdevs_operational": 3, 00:18:08.290 "base_bdevs_list": [ 00:18:08.290 { 
00:18:08.290 "name": "BaseBdev1", 00:18:08.290 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:08.290 "is_configured": true, 00:18:08.290 "data_offset": 0, 00:18:08.290 "data_size": 65536 00:18:08.290 }, 00:18:08.290 { 00:18:08.290 "name": null, 00:18:08.290 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:08.290 "is_configured": false, 00:18:08.290 "data_offset": 0, 00:18:08.290 "data_size": 65536 00:18:08.290 }, 00:18:08.290 { 00:18:08.290 "name": "BaseBdev3", 00:18:08.290 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:08.290 "is_configured": true, 00:18:08.290 "data_offset": 0, 00:18:08.290 "data_size": 65536 00:18:08.290 } 00:18:08.290 ] 00:18:08.290 }' 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.290 22:35:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.549 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.549 [2024-09-27 22:35:04.409176] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.808 "name": "Existed_Raid", 00:18:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.808 "strip_size_kb": 64, 00:18:08.808 "state": "configuring", 00:18:08.808 "raid_level": "raid5f", 00:18:08.808 "superblock": false, 00:18:08.808 "num_base_bdevs": 3, 00:18:08.808 "num_base_bdevs_discovered": 1, 00:18:08.808 "num_base_bdevs_operational": 3, 00:18:08.808 "base_bdevs_list": [ 00:18:08.808 { 00:18:08.808 "name": null, 00:18:08.808 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:08.808 "is_configured": false, 00:18:08.808 "data_offset": 0, 00:18:08.808 "data_size": 65536 00:18:08.808 }, 00:18:08.808 { 00:18:08.808 "name": null, 00:18:08.808 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:08.808 "is_configured": false, 00:18:08.808 "data_offset": 0, 00:18:08.808 "data_size": 65536 00:18:08.808 }, 00:18:08.808 { 00:18:08.808 "name": "BaseBdev3", 00:18:08.808 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:08.808 "is_configured": true, 00:18:08.808 "data_offset": 0, 00:18:08.808 "data_size": 65536 00:18:08.808 } 00:18:08.808 ] 00:18:08.808 }' 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.808 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.376 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.376 22:35:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:09.376 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.376 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.376 22:35:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.376 [2024-09-27 22:35:05.013198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.376 22:35:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.376 "name": "Existed_Raid", 00:18:09.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.376 "strip_size_kb": 64, 00:18:09.376 "state": "configuring", 00:18:09.376 "raid_level": "raid5f", 00:18:09.376 "superblock": false, 00:18:09.376 "num_base_bdevs": 3, 00:18:09.376 "num_base_bdevs_discovered": 2, 00:18:09.376 "num_base_bdevs_operational": 3, 00:18:09.376 "base_bdevs_list": [ 00:18:09.376 { 00:18:09.376 "name": null, 00:18:09.376 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:09.376 "is_configured": false, 00:18:09.376 "data_offset": 0, 00:18:09.376 "data_size": 65536 00:18:09.376 }, 00:18:09.376 { 00:18:09.376 "name": "BaseBdev2", 00:18:09.376 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:09.376 "is_configured": true, 00:18:09.376 "data_offset": 0, 00:18:09.376 "data_size": 65536 00:18:09.376 }, 00:18:09.376 { 00:18:09.376 "name": "BaseBdev3", 00:18:09.376 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:09.376 "is_configured": true, 00:18:09.376 "data_offset": 0, 00:18:09.376 "data_size": 65536 00:18:09.376 } 00:18:09.376 ] 00:18:09.376 }' 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.376 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.635 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:09.635 
22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.635 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.635 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8cc26ecf-7b12-4e00-8a5f-682239fcd6ed 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.895 [2024-09-27 22:35:05.628083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:09.895 [2024-09-27 22:35:05.628403] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:09.895 [2024-09-27 22:35:05.628434] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:09.895 [2024-09-27 22:35:05.628852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:18:09.895 [2024-09-27 22:35:05.636498] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:09.895 [2024-09-27 22:35:05.636547] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:09.895 [2024-09-27 22:35:05.636947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.895 NewBaseBdev 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.895 22:35:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.895 [ 00:18:09.895 { 00:18:09.895 "name": "NewBaseBdev", 00:18:09.895 "aliases": [ 00:18:09.895 "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed" 00:18:09.895 ], 00:18:09.895 "product_name": "Malloc disk", 00:18:09.895 "block_size": 512, 00:18:09.895 "num_blocks": 65536, 00:18:09.895 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:09.895 "assigned_rate_limits": { 00:18:09.895 "rw_ios_per_sec": 0, 00:18:09.895 "rw_mbytes_per_sec": 0, 00:18:09.895 "r_mbytes_per_sec": 0, 00:18:09.895 "w_mbytes_per_sec": 0 00:18:09.895 }, 00:18:09.895 "claimed": true, 00:18:09.895 "claim_type": "exclusive_write", 00:18:09.895 "zoned": false, 00:18:09.895 "supported_io_types": { 00:18:09.895 "read": true, 00:18:09.895 "write": true, 00:18:09.895 "unmap": true, 00:18:09.895 "flush": true, 00:18:09.895 "reset": true, 00:18:09.895 "nvme_admin": false, 00:18:09.895 "nvme_io": false, 00:18:09.895 "nvme_io_md": false, 00:18:09.895 "write_zeroes": true, 00:18:09.895 "zcopy": true, 00:18:09.895 "get_zone_info": false, 00:18:09.895 "zone_management": false, 00:18:09.895 "zone_append": false, 00:18:09.895 "compare": false, 00:18:09.895 "compare_and_write": false, 00:18:09.895 "abort": true, 00:18:09.895 "seek_hole": false, 00:18:09.895 "seek_data": false, 00:18:09.895 "copy": true, 00:18:09.895 "nvme_iov_md": false 00:18:09.895 }, 00:18:09.895 "memory_domains": [ 00:18:09.895 { 00:18:09.895 "dma_device_id": "system", 00:18:09.895 "dma_device_type": 1 00:18:09.895 }, 00:18:09.895 { 00:18:09.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.895 "dma_device_type": 2 00:18:09.895 } 00:18:09.895 ], 00:18:09.895 "driver_specific": {} 00:18:09.895 } 00:18:09.895 ] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:09.895 22:35:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.895 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.895 "name": "Existed_Raid", 00:18:09.896 "uuid": "91d3c13f-dbeb-4f28-a5d3-beb0bb167930", 00:18:09.896 "strip_size_kb": 64, 00:18:09.896 "state": "online", 
00:18:09.896 "raid_level": "raid5f", 00:18:09.896 "superblock": false, 00:18:09.896 "num_base_bdevs": 3, 00:18:09.896 "num_base_bdevs_discovered": 3, 00:18:09.896 "num_base_bdevs_operational": 3, 00:18:09.896 "base_bdevs_list": [ 00:18:09.896 { 00:18:09.896 "name": "NewBaseBdev", 00:18:09.896 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:09.896 "is_configured": true, 00:18:09.896 "data_offset": 0, 00:18:09.896 "data_size": 65536 00:18:09.896 }, 00:18:09.896 { 00:18:09.896 "name": "BaseBdev2", 00:18:09.896 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:09.896 "is_configured": true, 00:18:09.896 "data_offset": 0, 00:18:09.896 "data_size": 65536 00:18:09.896 }, 00:18:09.896 { 00:18:09.896 "name": "BaseBdev3", 00:18:09.896 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:09.896 "is_configured": true, 00:18:09.896 "data_offset": 0, 00:18:09.896 "data_size": 65536 00:18:09.896 } 00:18:09.896 ] 00:18:09.896 }' 00:18:09.896 22:35:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.896 22:35:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:10.467 22:35:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.467 [2024-09-27 22:35:06.122986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.467 "name": "Existed_Raid", 00:18:10.467 "aliases": [ 00:18:10.467 "91d3c13f-dbeb-4f28-a5d3-beb0bb167930" 00:18:10.467 ], 00:18:10.467 "product_name": "Raid Volume", 00:18:10.467 "block_size": 512, 00:18:10.467 "num_blocks": 131072, 00:18:10.467 "uuid": "91d3c13f-dbeb-4f28-a5d3-beb0bb167930", 00:18:10.467 "assigned_rate_limits": { 00:18:10.467 "rw_ios_per_sec": 0, 00:18:10.467 "rw_mbytes_per_sec": 0, 00:18:10.467 "r_mbytes_per_sec": 0, 00:18:10.467 "w_mbytes_per_sec": 0 00:18:10.467 }, 00:18:10.467 "claimed": false, 00:18:10.467 "zoned": false, 00:18:10.467 "supported_io_types": { 00:18:10.467 "read": true, 00:18:10.467 "write": true, 00:18:10.467 "unmap": false, 00:18:10.467 "flush": false, 00:18:10.467 "reset": true, 00:18:10.467 "nvme_admin": false, 00:18:10.467 "nvme_io": false, 00:18:10.467 "nvme_io_md": false, 00:18:10.467 "write_zeroes": true, 00:18:10.467 "zcopy": false, 00:18:10.467 "get_zone_info": false, 00:18:10.467 "zone_management": false, 00:18:10.467 "zone_append": false, 00:18:10.467 "compare": false, 00:18:10.467 "compare_and_write": false, 00:18:10.467 "abort": false, 00:18:10.467 "seek_hole": false, 00:18:10.467 "seek_data": false, 00:18:10.467 "copy": false, 00:18:10.467 "nvme_iov_md": false 00:18:10.467 }, 00:18:10.467 "driver_specific": { 00:18:10.467 "raid": { 00:18:10.467 "uuid": 
"91d3c13f-dbeb-4f28-a5d3-beb0bb167930", 00:18:10.467 "strip_size_kb": 64, 00:18:10.467 "state": "online", 00:18:10.467 "raid_level": "raid5f", 00:18:10.467 "superblock": false, 00:18:10.467 "num_base_bdevs": 3, 00:18:10.467 "num_base_bdevs_discovered": 3, 00:18:10.467 "num_base_bdevs_operational": 3, 00:18:10.467 "base_bdevs_list": [ 00:18:10.467 { 00:18:10.467 "name": "NewBaseBdev", 00:18:10.467 "uuid": "8cc26ecf-7b12-4e00-8a5f-682239fcd6ed", 00:18:10.467 "is_configured": true, 00:18:10.467 "data_offset": 0, 00:18:10.467 "data_size": 65536 00:18:10.467 }, 00:18:10.467 { 00:18:10.467 "name": "BaseBdev2", 00:18:10.467 "uuid": "cff153a2-0da4-4be9-bd4b-88771bdfd073", 00:18:10.467 "is_configured": true, 00:18:10.467 "data_offset": 0, 00:18:10.467 "data_size": 65536 00:18:10.467 }, 00:18:10.467 { 00:18:10.467 "name": "BaseBdev3", 00:18:10.467 "uuid": "14bf54f6-d2d1-4263-8fa3-b24029a3f374", 00:18:10.467 "is_configured": true, 00:18:10.467 "data_offset": 0, 00:18:10.467 "data_size": 65536 00:18:10.467 } 00:18:10.467 ] 00:18:10.467 } 00:18:10.467 } 00:18:10.467 }' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:10.467 BaseBdev2 00:18:10.467 BaseBdev3' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.467 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.468 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.468 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.728 [2024-09-27 22:35:06.402333] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:10.728 [2024-09-27 22:35:06.402370] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.728 [2024-09-27 22:35:06.402470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.728 [2024-09-27 22:35:06.402799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.728 [2024-09-27 22:35:06.402816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80924 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80924 ']' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 80924 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80924 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:10.728 killing process with pid 80924 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80924' 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80924 00:18:10.728 [2024-09-27 22:35:06.458891] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.728 22:35:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80924 00:18:10.987 [2024-09-27 22:35:06.796788] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.548 22:35:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:13.548 00:18:13.548 real 0m12.227s 00:18:13.548 user 0m18.352s 00:18:13.548 sys 0m2.361s 00:18:13.548 22:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:13.548 22:35:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.548 ************************************ 00:18:13.548 END TEST raid5f_state_function_test 00:18:13.548 ************************************ 00:18:13.548 22:35:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:18:13.548 22:35:09 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:13.548 22:35:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:13.548 22:35:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.548 ************************************ 00:18:13.548 START TEST raid5f_state_function_test_sb 00:18:13.548 ************************************ 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:13.548 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:13.549 22:35:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81562 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81562' 00:18:13.549 Process raid pid: 81562 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 81562 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81562 ']' 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.549 22:35:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.549 [2024-09-27 22:35:09.148869] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:18:13.549 [2024-09-27 22:35:09.149044] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.549 [2024-09-27 22:35:09.325123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.808 [2024-09-27 22:35:09.577588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.066 [2024-09-27 22:35:09.831428] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.066 [2024-09-27 22:35:09.831486] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.663 [2024-09-27 22:35:10.335069] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.663 [2024-09-27 22:35:10.335130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.663 [2024-09-27 22:35:10.335142] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.663 [2024-09-27 22:35:10.335159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.663 [2024-09-27 22:35:10.335167] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:18:14.663 [2024-09-27 22:35:10.335180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.663 22:35:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.663 "name": "Existed_Raid", 00:18:14.663 "uuid": "0560bcdd-950a-4a7c-82ca-1ef734cfde73", 00:18:14.663 "strip_size_kb": 64, 00:18:14.663 "state": "configuring", 00:18:14.663 "raid_level": "raid5f", 00:18:14.663 "superblock": true, 00:18:14.663 "num_base_bdevs": 3, 00:18:14.663 "num_base_bdevs_discovered": 0, 00:18:14.663 "num_base_bdevs_operational": 3, 00:18:14.663 "base_bdevs_list": [ 00:18:14.663 { 00:18:14.663 "name": "BaseBdev1", 00:18:14.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.663 "is_configured": false, 00:18:14.663 "data_offset": 0, 00:18:14.663 "data_size": 0 00:18:14.663 }, 00:18:14.663 { 00:18:14.663 "name": "BaseBdev2", 00:18:14.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.663 "is_configured": false, 00:18:14.663 "data_offset": 0, 00:18:14.663 "data_size": 0 00:18:14.663 }, 00:18:14.663 { 00:18:14.663 "name": "BaseBdev3", 00:18:14.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.663 "is_configured": false, 00:18:14.663 "data_offset": 0, 00:18:14.663 "data_size": 0 00:18:14.663 } 00:18:14.663 ] 00:18:14.663 }' 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.663 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 [2024-09-27 22:35:10.738404] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.935 
[2024-09-27 22:35:10.738451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 [2024-09-27 22:35:10.746473] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.935 [2024-09-27 22:35:10.746530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.935 [2024-09-27 22:35:10.746540] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.935 [2024-09-27 22:35:10.746554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.935 [2024-09-27 22:35:10.746563] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.935 [2024-09-27 22:35:10.746577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 [2024-09-27 22:35:10.798702] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.935 BaseBdev1 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.935 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 [ 00:18:15.195 { 00:18:15.195 "name": "BaseBdev1", 00:18:15.195 "aliases": [ 00:18:15.195 "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb" 00:18:15.195 ], 00:18:15.195 "product_name": "Malloc disk", 00:18:15.195 "block_size": 512, 00:18:15.195 
"num_blocks": 65536, 00:18:15.195 "uuid": "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb", 00:18:15.195 "assigned_rate_limits": { 00:18:15.195 "rw_ios_per_sec": 0, 00:18:15.195 "rw_mbytes_per_sec": 0, 00:18:15.195 "r_mbytes_per_sec": 0, 00:18:15.195 "w_mbytes_per_sec": 0 00:18:15.195 }, 00:18:15.195 "claimed": true, 00:18:15.195 "claim_type": "exclusive_write", 00:18:15.195 "zoned": false, 00:18:15.195 "supported_io_types": { 00:18:15.195 "read": true, 00:18:15.195 "write": true, 00:18:15.195 "unmap": true, 00:18:15.195 "flush": true, 00:18:15.195 "reset": true, 00:18:15.195 "nvme_admin": false, 00:18:15.195 "nvme_io": false, 00:18:15.195 "nvme_io_md": false, 00:18:15.195 "write_zeroes": true, 00:18:15.195 "zcopy": true, 00:18:15.195 "get_zone_info": false, 00:18:15.195 "zone_management": false, 00:18:15.195 "zone_append": false, 00:18:15.195 "compare": false, 00:18:15.195 "compare_and_write": false, 00:18:15.195 "abort": true, 00:18:15.195 "seek_hole": false, 00:18:15.195 "seek_data": false, 00:18:15.195 "copy": true, 00:18:15.195 "nvme_iov_md": false 00:18:15.195 }, 00:18:15.195 "memory_domains": [ 00:18:15.195 { 00:18:15.195 "dma_device_id": "system", 00:18:15.195 "dma_device_type": 1 00:18:15.195 }, 00:18:15.195 { 00:18:15.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.195 "dma_device_type": 2 00:18:15.195 } 00:18:15.195 ], 00:18:15.195 "driver_specific": {} 00:18:15.195 } 00:18:15.195 ] 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.195 "name": "Existed_Raid", 00:18:15.195 "uuid": "84c878eb-e4eb-4072-9d40-382ef2f4d0b2", 00:18:15.195 "strip_size_kb": 64, 00:18:15.195 "state": "configuring", 00:18:15.195 "raid_level": "raid5f", 00:18:15.195 "superblock": true, 00:18:15.195 "num_base_bdevs": 3, 00:18:15.195 "num_base_bdevs_discovered": 1, 00:18:15.195 "num_base_bdevs_operational": 3, 00:18:15.195 "base_bdevs_list": [ 00:18:15.195 { 00:18:15.195 
"name": "BaseBdev1", 00:18:15.195 "uuid": "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb", 00:18:15.195 "is_configured": true, 00:18:15.195 "data_offset": 2048, 00:18:15.195 "data_size": 63488 00:18:15.195 }, 00:18:15.195 { 00:18:15.195 "name": "BaseBdev2", 00:18:15.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.195 "is_configured": false, 00:18:15.195 "data_offset": 0, 00:18:15.195 "data_size": 0 00:18:15.195 }, 00:18:15.195 { 00:18:15.195 "name": "BaseBdev3", 00:18:15.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.195 "is_configured": false, 00:18:15.195 "data_offset": 0, 00:18:15.195 "data_size": 0 00:18:15.195 } 00:18:15.195 ] 00:18:15.195 }' 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.195 22:35:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.456 [2024-09-27 22:35:11.238192] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.456 [2024-09-27 22:35:11.238257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:18:15.456 [2024-09-27 22:35:11.250292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.456 [2024-09-27 22:35:11.252654] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.456 [2024-09-27 22:35:11.252888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.456 [2024-09-27 22:35:11.252911] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.456 [2024-09-27 22:35:11.252925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.456 "name": "Existed_Raid", 00:18:15.456 "uuid": "580b19b2-3d4f-4986-a72b-43f8df65c2f1", 00:18:15.456 "strip_size_kb": 64, 00:18:15.456 "state": "configuring", 00:18:15.456 "raid_level": "raid5f", 00:18:15.456 "superblock": true, 00:18:15.456 "num_base_bdevs": 3, 00:18:15.456 "num_base_bdevs_discovered": 1, 00:18:15.456 "num_base_bdevs_operational": 3, 00:18:15.456 "base_bdevs_list": [ 00:18:15.456 { 00:18:15.456 "name": "BaseBdev1", 00:18:15.456 "uuid": "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb", 00:18:15.456 "is_configured": true, 00:18:15.456 "data_offset": 2048, 00:18:15.456 "data_size": 63488 00:18:15.456 }, 00:18:15.456 { 00:18:15.456 "name": "BaseBdev2", 00:18:15.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.456 "is_configured": false, 00:18:15.456 "data_offset": 0, 00:18:15.456 "data_size": 0 00:18:15.456 }, 00:18:15.456 { 00:18:15.456 "name": "BaseBdev3", 00:18:15.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.456 "is_configured": false, 00:18:15.456 "data_offset": 0, 00:18:15.456 "data_size": 
0 00:18:15.456 } 00:18:15.456 ] 00:18:15.456 }' 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.456 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 [2024-09-27 22:35:11.692784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.026 BaseBdev2 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 [ 00:18:16.026 { 00:18:16.026 "name": "BaseBdev2", 00:18:16.026 "aliases": [ 00:18:16.026 "1fafdbee-87ff-492c-aa35-66c76931ddf4" 00:18:16.026 ], 00:18:16.026 "product_name": "Malloc disk", 00:18:16.026 "block_size": 512, 00:18:16.026 "num_blocks": 65536, 00:18:16.026 "uuid": "1fafdbee-87ff-492c-aa35-66c76931ddf4", 00:18:16.026 "assigned_rate_limits": { 00:18:16.026 "rw_ios_per_sec": 0, 00:18:16.026 "rw_mbytes_per_sec": 0, 00:18:16.026 "r_mbytes_per_sec": 0, 00:18:16.026 "w_mbytes_per_sec": 0 00:18:16.026 }, 00:18:16.026 "claimed": true, 00:18:16.026 "claim_type": "exclusive_write", 00:18:16.026 "zoned": false, 00:18:16.026 "supported_io_types": { 00:18:16.026 "read": true, 00:18:16.026 "write": true, 00:18:16.026 "unmap": true, 00:18:16.026 "flush": true, 00:18:16.026 "reset": true, 00:18:16.026 "nvme_admin": false, 00:18:16.026 "nvme_io": false, 00:18:16.026 "nvme_io_md": false, 00:18:16.026 "write_zeroes": true, 00:18:16.026 "zcopy": true, 00:18:16.026 "get_zone_info": false, 00:18:16.026 "zone_management": false, 00:18:16.026 "zone_append": false, 00:18:16.026 "compare": false, 00:18:16.026 "compare_and_write": false, 00:18:16.026 "abort": true, 00:18:16.026 "seek_hole": false, 00:18:16.026 "seek_data": false, 00:18:16.026 "copy": true, 00:18:16.026 "nvme_iov_md": false 00:18:16.026 }, 00:18:16.026 "memory_domains": [ 00:18:16.026 { 00:18:16.026 "dma_device_id": "system", 00:18:16.026 "dma_device_type": 1 00:18:16.026 }, 00:18:16.026 { 00:18:16.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.026 "dma_device_type": 2 00:18:16.026 } 
00:18:16.026 ], 00:18:16.026 "driver_specific": {} 00:18:16.026 } 00:18:16.026 ] 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.026 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.027 22:35:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.027 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.027 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.027 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.027 "name": "Existed_Raid", 00:18:16.027 "uuid": "580b19b2-3d4f-4986-a72b-43f8df65c2f1", 00:18:16.027 "strip_size_kb": 64, 00:18:16.027 "state": "configuring", 00:18:16.027 "raid_level": "raid5f", 00:18:16.027 "superblock": true, 00:18:16.027 "num_base_bdevs": 3, 00:18:16.027 "num_base_bdevs_discovered": 2, 00:18:16.027 "num_base_bdevs_operational": 3, 00:18:16.027 "base_bdevs_list": [ 00:18:16.027 { 00:18:16.027 "name": "BaseBdev1", 00:18:16.027 "uuid": "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb", 00:18:16.027 "is_configured": true, 00:18:16.027 "data_offset": 2048, 00:18:16.027 "data_size": 63488 00:18:16.027 }, 00:18:16.027 { 00:18:16.027 "name": "BaseBdev2", 00:18:16.027 "uuid": "1fafdbee-87ff-492c-aa35-66c76931ddf4", 00:18:16.027 "is_configured": true, 00:18:16.027 "data_offset": 2048, 00:18:16.027 "data_size": 63488 00:18:16.027 }, 00:18:16.027 { 00:18:16.027 "name": "BaseBdev3", 00:18:16.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.027 "is_configured": false, 00:18:16.027 "data_offset": 0, 00:18:16.027 "data_size": 0 00:18:16.027 } 00:18:16.027 ] 00:18:16.027 }' 00:18:16.027 22:35:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.027 22:35:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.286 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:16.286 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:16.286 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.545 [2024-09-27 22:35:12.195785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.545 [2024-09-27 22:35:12.196124] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:16.545 [2024-09-27 22:35:12.196154] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:16.545 [2024-09-27 22:35:12.196478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:16.545 BaseBdev3 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.546 [2024-09-27 22:35:12.203583] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:16.546 [2024-09-27 22:35:12.203791] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:16.546 [2024-09-27 22:35:12.204434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.546 [ 00:18:16.546 { 00:18:16.546 "name": "BaseBdev3", 00:18:16.546 "aliases": [ 00:18:16.546 "dc2e83d4-bf34-4897-9d8e-af6bd903f930" 00:18:16.546 ], 00:18:16.546 "product_name": "Malloc disk", 00:18:16.546 "block_size": 512, 00:18:16.546 "num_blocks": 65536, 00:18:16.546 "uuid": "dc2e83d4-bf34-4897-9d8e-af6bd903f930", 00:18:16.546 "assigned_rate_limits": { 00:18:16.546 "rw_ios_per_sec": 0, 00:18:16.546 "rw_mbytes_per_sec": 0, 00:18:16.546 "r_mbytes_per_sec": 0, 00:18:16.546 "w_mbytes_per_sec": 0 00:18:16.546 }, 00:18:16.546 "claimed": true, 00:18:16.546 "claim_type": "exclusive_write", 00:18:16.546 "zoned": false, 00:18:16.546 "supported_io_types": { 00:18:16.546 "read": true, 00:18:16.546 "write": true, 00:18:16.546 "unmap": true, 00:18:16.546 "flush": true, 00:18:16.546 "reset": true, 00:18:16.546 "nvme_admin": false, 00:18:16.546 "nvme_io": false, 00:18:16.546 "nvme_io_md": false, 00:18:16.546 "write_zeroes": true, 00:18:16.546 "zcopy": true, 00:18:16.546 "get_zone_info": false, 00:18:16.546 "zone_management": false, 00:18:16.546 "zone_append": false, 00:18:16.546 "compare": false, 00:18:16.546 "compare_and_write": false, 00:18:16.546 "abort": true, 00:18:16.546 "seek_hole": false, 00:18:16.546 "seek_data": false, 00:18:16.546 "copy": true, 00:18:16.546 
"nvme_iov_md": false 00:18:16.546 }, 00:18:16.546 "memory_domains": [ 00:18:16.546 { 00:18:16.546 "dma_device_id": "system", 00:18:16.546 "dma_device_type": 1 00:18:16.546 }, 00:18:16.546 { 00:18:16.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.546 "dma_device_type": 2 00:18:16.546 } 00:18:16.546 ], 00:18:16.546 "driver_specific": {} 00:18:16.546 } 00:18:16.546 ] 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.546 "name": "Existed_Raid", 00:18:16.546 "uuid": "580b19b2-3d4f-4986-a72b-43f8df65c2f1", 00:18:16.546 "strip_size_kb": 64, 00:18:16.546 "state": "online", 00:18:16.546 "raid_level": "raid5f", 00:18:16.546 "superblock": true, 00:18:16.546 "num_base_bdevs": 3, 00:18:16.546 "num_base_bdevs_discovered": 3, 00:18:16.546 "num_base_bdevs_operational": 3, 00:18:16.546 "base_bdevs_list": [ 00:18:16.546 { 00:18:16.546 "name": "BaseBdev1", 00:18:16.546 "uuid": "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb", 00:18:16.546 "is_configured": true, 00:18:16.546 "data_offset": 2048, 00:18:16.546 "data_size": 63488 00:18:16.546 }, 00:18:16.546 { 00:18:16.546 "name": "BaseBdev2", 00:18:16.546 "uuid": "1fafdbee-87ff-492c-aa35-66c76931ddf4", 00:18:16.546 "is_configured": true, 00:18:16.546 "data_offset": 2048, 00:18:16.546 "data_size": 63488 00:18:16.546 }, 00:18:16.546 { 00:18:16.546 "name": "BaseBdev3", 00:18:16.546 "uuid": "dc2e83d4-bf34-4897-9d8e-af6bd903f930", 00:18:16.546 "is_configured": true, 00:18:16.546 "data_offset": 2048, 00:18:16.546 "data_size": 63488 00:18:16.546 } 00:18:16.546 ] 00:18:16.546 }' 00:18:16.546 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.546 22:35:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.115 [2024-09-27 22:35:12.730752] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.115 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.115 "name": "Existed_Raid", 00:18:17.115 "aliases": [ 00:18:17.115 "580b19b2-3d4f-4986-a72b-43f8df65c2f1" 00:18:17.115 ], 00:18:17.115 "product_name": "Raid Volume", 00:18:17.115 "block_size": 512, 00:18:17.115 "num_blocks": 126976, 00:18:17.115 "uuid": "580b19b2-3d4f-4986-a72b-43f8df65c2f1", 00:18:17.115 "assigned_rate_limits": { 00:18:17.115 "rw_ios_per_sec": 0, 00:18:17.115 
"rw_mbytes_per_sec": 0, 00:18:17.115 "r_mbytes_per_sec": 0, 00:18:17.115 "w_mbytes_per_sec": 0 00:18:17.115 }, 00:18:17.115 "claimed": false, 00:18:17.115 "zoned": false, 00:18:17.115 "supported_io_types": { 00:18:17.115 "read": true, 00:18:17.115 "write": true, 00:18:17.115 "unmap": false, 00:18:17.115 "flush": false, 00:18:17.115 "reset": true, 00:18:17.115 "nvme_admin": false, 00:18:17.115 "nvme_io": false, 00:18:17.115 "nvme_io_md": false, 00:18:17.115 "write_zeroes": true, 00:18:17.115 "zcopy": false, 00:18:17.115 "get_zone_info": false, 00:18:17.115 "zone_management": false, 00:18:17.115 "zone_append": false, 00:18:17.115 "compare": false, 00:18:17.115 "compare_and_write": false, 00:18:17.115 "abort": false, 00:18:17.115 "seek_hole": false, 00:18:17.115 "seek_data": false, 00:18:17.115 "copy": false, 00:18:17.115 "nvme_iov_md": false 00:18:17.115 }, 00:18:17.115 "driver_specific": { 00:18:17.115 "raid": { 00:18:17.115 "uuid": "580b19b2-3d4f-4986-a72b-43f8df65c2f1", 00:18:17.115 "strip_size_kb": 64, 00:18:17.115 "state": "online", 00:18:17.115 "raid_level": "raid5f", 00:18:17.115 "superblock": true, 00:18:17.115 "num_base_bdevs": 3, 00:18:17.115 "num_base_bdevs_discovered": 3, 00:18:17.115 "num_base_bdevs_operational": 3, 00:18:17.115 "base_bdevs_list": [ 00:18:17.115 { 00:18:17.115 "name": "BaseBdev1", 00:18:17.115 "uuid": "f6c76313-7e77-4aaf-adf4-c3c5fb6140fb", 00:18:17.115 "is_configured": true, 00:18:17.115 "data_offset": 2048, 00:18:17.115 "data_size": 63488 00:18:17.115 }, 00:18:17.115 { 00:18:17.115 "name": "BaseBdev2", 00:18:17.116 "uuid": "1fafdbee-87ff-492c-aa35-66c76931ddf4", 00:18:17.116 "is_configured": true, 00:18:17.116 "data_offset": 2048, 00:18:17.116 "data_size": 63488 00:18:17.116 }, 00:18:17.116 { 00:18:17.116 "name": "BaseBdev3", 00:18:17.116 "uuid": "dc2e83d4-bf34-4897-9d8e-af6bd903f930", 00:18:17.116 "is_configured": true, 00:18:17.116 "data_offset": 2048, 00:18:17.116 "data_size": 63488 00:18:17.116 } 00:18:17.116 ] 00:18:17.116 } 
00:18:17.116 } 00:18:17.116 }' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:17.116 BaseBdev2 00:18:17.116 BaseBdev3' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.116 22:35:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.376 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.376 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.376 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:17.376 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.376 22:35:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.376 [2024-09-27 22:35:13.054149] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.376 "name": "Existed_Raid", 00:18:17.376 "uuid": "580b19b2-3d4f-4986-a72b-43f8df65c2f1", 00:18:17.376 "strip_size_kb": 64, 00:18:17.376 "state": "online", 00:18:17.376 "raid_level": "raid5f", 00:18:17.376 "superblock": true, 00:18:17.376 "num_base_bdevs": 3, 00:18:17.376 "num_base_bdevs_discovered": 2, 00:18:17.376 "num_base_bdevs_operational": 2, 00:18:17.376 "base_bdevs_list": [ 00:18:17.376 { 00:18:17.376 "name": null, 00:18:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.376 "is_configured": false, 00:18:17.376 "data_offset": 0, 00:18:17.376 "data_size": 63488 00:18:17.376 }, 00:18:17.376 { 00:18:17.376 "name": "BaseBdev2", 00:18:17.376 "uuid": "1fafdbee-87ff-492c-aa35-66c76931ddf4", 00:18:17.376 "is_configured": true, 00:18:17.376 "data_offset": 2048, 00:18:17.376 "data_size": 63488 00:18:17.376 }, 00:18:17.376 { 00:18:17.376 "name": "BaseBdev3", 00:18:17.376 "uuid": "dc2e83d4-bf34-4897-9d8e-af6bd903f930", 00:18:17.376 "is_configured": true, 00:18:17.376 "data_offset": 2048, 00:18:17.376 "data_size": 63488 00:18:17.376 } 00:18:17.376 ] 00:18:17.376 }' 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.376 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.987 22:35:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.987 [2024-09-27 22:35:13.668112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:17.987 [2024-09-27 22:35:13.668278] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.987 [2024-09-27 22:35:13.774286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:17.987 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.988 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.988 [2024-09-27 22:35:13.826260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:17.988 [2024-09-27 22:35:13.826327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.247 22:35:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.247 BaseBdev2 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:18.247 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.248 [ 00:18:18.248 { 00:18:18.248 "name": "BaseBdev2", 00:18:18.248 "aliases": [ 00:18:18.248 "a97bdc12-d676-4c94-832b-83dc1c2340d4" 00:18:18.248 ], 00:18:18.248 "product_name": "Malloc disk", 00:18:18.248 "block_size": 512, 00:18:18.248 "num_blocks": 65536, 00:18:18.248 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:18.248 "assigned_rate_limits": { 00:18:18.248 "rw_ios_per_sec": 0, 00:18:18.248 "rw_mbytes_per_sec": 0, 00:18:18.248 "r_mbytes_per_sec": 0, 00:18:18.248 "w_mbytes_per_sec": 0 00:18:18.248 }, 00:18:18.248 "claimed": false, 00:18:18.248 "zoned": false, 00:18:18.248 "supported_io_types": { 00:18:18.248 "read": true, 00:18:18.248 "write": true, 00:18:18.248 "unmap": true, 00:18:18.248 "flush": true, 00:18:18.248 "reset": true, 00:18:18.248 "nvme_admin": false, 00:18:18.248 "nvme_io": false, 00:18:18.248 "nvme_io_md": false, 00:18:18.248 "write_zeroes": true, 00:18:18.248 "zcopy": true, 00:18:18.248 "get_zone_info": false, 00:18:18.248 "zone_management": false, 00:18:18.248 "zone_append": false, 
00:18:18.248 "compare": false, 00:18:18.248 "compare_and_write": false, 00:18:18.248 "abort": true, 00:18:18.248 "seek_hole": false, 00:18:18.248 "seek_data": false, 00:18:18.248 "copy": true, 00:18:18.248 "nvme_iov_md": false 00:18:18.248 }, 00:18:18.248 "memory_domains": [ 00:18:18.248 { 00:18:18.248 "dma_device_id": "system", 00:18:18.248 "dma_device_type": 1 00:18:18.248 }, 00:18:18.248 { 00:18:18.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.248 "dma_device_type": 2 00:18:18.248 } 00:18:18.248 ], 00:18:18.248 "driver_specific": {} 00:18:18.248 } 00:18:18.248 ] 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.248 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.508 BaseBdev3 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:18.508 
22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.508 [ 00:18:18.508 { 00:18:18.508 "name": "BaseBdev3", 00:18:18.508 "aliases": [ 00:18:18.508 "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b" 00:18:18.508 ], 00:18:18.508 "product_name": "Malloc disk", 00:18:18.508 "block_size": 512, 00:18:18.508 "num_blocks": 65536, 00:18:18.508 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:18.508 "assigned_rate_limits": { 00:18:18.508 "rw_ios_per_sec": 0, 00:18:18.508 "rw_mbytes_per_sec": 0, 00:18:18.508 "r_mbytes_per_sec": 0, 00:18:18.508 "w_mbytes_per_sec": 0 00:18:18.508 }, 00:18:18.508 "claimed": false, 00:18:18.508 "zoned": false, 00:18:18.508 "supported_io_types": { 00:18:18.508 "read": true, 00:18:18.508 "write": true, 00:18:18.508 "unmap": true, 00:18:18.508 "flush": true, 00:18:18.508 "reset": true, 00:18:18.508 "nvme_admin": false, 00:18:18.508 "nvme_io": false, 00:18:18.508 "nvme_io_md": false, 00:18:18.508 "write_zeroes": true, 00:18:18.508 "zcopy": true, 00:18:18.508 "get_zone_info": 
false, 00:18:18.508 "zone_management": false, 00:18:18.508 "zone_append": false, 00:18:18.508 "compare": false, 00:18:18.508 "compare_and_write": false, 00:18:18.508 "abort": true, 00:18:18.508 "seek_hole": false, 00:18:18.508 "seek_data": false, 00:18:18.508 "copy": true, 00:18:18.508 "nvme_iov_md": false 00:18:18.508 }, 00:18:18.508 "memory_domains": [ 00:18:18.508 { 00:18:18.508 "dma_device_id": "system", 00:18:18.508 "dma_device_type": 1 00:18:18.508 }, 00:18:18.508 { 00:18:18.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.508 "dma_device_type": 2 00:18:18.508 } 00:18:18.508 ], 00:18:18.508 "driver_specific": {} 00:18:18.508 } 00:18:18.508 ] 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.508 [2024-09-27 22:35:14.199687] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.508 [2024-09-27 22:35:14.199748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.508 [2024-09-27 22:35:14.199778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.508 [2024-09-27 22:35:14.202101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.508 22:35:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.508 "name": "Existed_Raid", 00:18:18.508 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:18.508 "strip_size_kb": 64, 00:18:18.508 "state": "configuring", 00:18:18.508 "raid_level": "raid5f", 00:18:18.508 "superblock": true, 00:18:18.508 "num_base_bdevs": 3, 00:18:18.508 "num_base_bdevs_discovered": 2, 00:18:18.508 "num_base_bdevs_operational": 3, 00:18:18.508 "base_bdevs_list": [ 00:18:18.508 { 00:18:18.508 "name": "BaseBdev1", 00:18:18.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.508 "is_configured": false, 00:18:18.508 "data_offset": 0, 00:18:18.508 "data_size": 0 00:18:18.508 }, 00:18:18.508 { 00:18:18.508 "name": "BaseBdev2", 00:18:18.508 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:18.508 "is_configured": true, 00:18:18.508 "data_offset": 2048, 00:18:18.508 "data_size": 63488 00:18:18.508 }, 00:18:18.508 { 00:18:18.508 "name": "BaseBdev3", 00:18:18.508 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:18.508 "is_configured": true, 00:18:18.508 "data_offset": 2048, 00:18:18.508 "data_size": 63488 00:18:18.508 } 00:18:18.508 ] 00:18:18.508 }' 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.508 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.075 [2024-09-27 22:35:14.682984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.075 
22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.075 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.076 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.076 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.076 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.076 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.076 "name": "Existed_Raid", 00:18:19.076 "uuid": 
"6deb723c-47f8-48aa-a474-484d74a50613", 00:18:19.076 "strip_size_kb": 64, 00:18:19.076 "state": "configuring", 00:18:19.076 "raid_level": "raid5f", 00:18:19.076 "superblock": true, 00:18:19.076 "num_base_bdevs": 3, 00:18:19.076 "num_base_bdevs_discovered": 1, 00:18:19.076 "num_base_bdevs_operational": 3, 00:18:19.076 "base_bdevs_list": [ 00:18:19.076 { 00:18:19.076 "name": "BaseBdev1", 00:18:19.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.076 "is_configured": false, 00:18:19.076 "data_offset": 0, 00:18:19.076 "data_size": 0 00:18:19.076 }, 00:18:19.076 { 00:18:19.076 "name": null, 00:18:19.076 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:19.076 "is_configured": false, 00:18:19.076 "data_offset": 0, 00:18:19.076 "data_size": 63488 00:18:19.076 }, 00:18:19.076 { 00:18:19.076 "name": "BaseBdev3", 00:18:19.076 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:19.076 "is_configured": true, 00:18:19.076 "data_offset": 2048, 00:18:19.076 "data_size": 63488 00:18:19.076 } 00:18:19.076 ] 00:18:19.076 }' 00:18:19.076 22:35:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.076 22:35:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:19.335 22:35:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.335 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.595 [2024-09-27 22:35:15.228842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.595 BaseBdev1 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.595 [ 00:18:19.595 { 00:18:19.595 "name": "BaseBdev1", 00:18:19.595 "aliases": [ 00:18:19.595 "c6b98e34-cac3-4363-9c4f-5bae77c858a6" 00:18:19.595 ], 00:18:19.595 "product_name": "Malloc disk", 00:18:19.595 "block_size": 512, 00:18:19.595 "num_blocks": 65536, 00:18:19.595 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:19.595 "assigned_rate_limits": { 00:18:19.595 "rw_ios_per_sec": 0, 00:18:19.595 "rw_mbytes_per_sec": 0, 00:18:19.595 "r_mbytes_per_sec": 0, 00:18:19.595 "w_mbytes_per_sec": 0 00:18:19.595 }, 00:18:19.595 "claimed": true, 00:18:19.595 "claim_type": "exclusive_write", 00:18:19.595 "zoned": false, 00:18:19.595 "supported_io_types": { 00:18:19.595 "read": true, 00:18:19.595 "write": true, 00:18:19.595 "unmap": true, 00:18:19.595 "flush": true, 00:18:19.595 "reset": true, 00:18:19.595 "nvme_admin": false, 00:18:19.595 "nvme_io": false, 00:18:19.595 "nvme_io_md": false, 00:18:19.595 "write_zeroes": true, 00:18:19.595 "zcopy": true, 00:18:19.595 "get_zone_info": false, 00:18:19.595 "zone_management": false, 00:18:19.595 "zone_append": false, 00:18:19.595 "compare": false, 00:18:19.595 "compare_and_write": false, 00:18:19.595 "abort": true, 00:18:19.595 "seek_hole": false, 00:18:19.595 "seek_data": false, 00:18:19.595 "copy": true, 00:18:19.595 "nvme_iov_md": false 00:18:19.595 }, 00:18:19.595 "memory_domains": [ 00:18:19.595 { 00:18:19.595 "dma_device_id": "system", 00:18:19.595 "dma_device_type": 1 00:18:19.595 }, 00:18:19.595 { 00:18:19.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.595 "dma_device_type": 2 00:18:19.595 } 00:18:19.595 ], 00:18:19.595 "driver_specific": {} 00:18:19.595 } 00:18:19.595 ] 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.595 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.595 "name": "Existed_Raid", 00:18:19.596 "uuid": 
"6deb723c-47f8-48aa-a474-484d74a50613", 00:18:19.596 "strip_size_kb": 64, 00:18:19.596 "state": "configuring", 00:18:19.596 "raid_level": "raid5f", 00:18:19.596 "superblock": true, 00:18:19.596 "num_base_bdevs": 3, 00:18:19.596 "num_base_bdevs_discovered": 2, 00:18:19.596 "num_base_bdevs_operational": 3, 00:18:19.596 "base_bdevs_list": [ 00:18:19.596 { 00:18:19.596 "name": "BaseBdev1", 00:18:19.596 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:19.596 "is_configured": true, 00:18:19.596 "data_offset": 2048, 00:18:19.596 "data_size": 63488 00:18:19.596 }, 00:18:19.596 { 00:18:19.596 "name": null, 00:18:19.596 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:19.596 "is_configured": false, 00:18:19.596 "data_offset": 0, 00:18:19.596 "data_size": 63488 00:18:19.596 }, 00:18:19.596 { 00:18:19.596 "name": "BaseBdev3", 00:18:19.596 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:19.596 "is_configured": true, 00:18:19.596 "data_offset": 2048, 00:18:19.596 "data_size": 63488 00:18:19.596 } 00:18:19.596 ] 00:18:19.596 }' 00:18:19.596 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.596 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:19.854 22:35:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.854 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.114 [2024-09-27 22:35:15.736235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.114 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.115 "name": "Existed_Raid", 00:18:20.115 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:20.115 "strip_size_kb": 64, 00:18:20.115 "state": "configuring", 00:18:20.115 "raid_level": "raid5f", 00:18:20.115 "superblock": true, 00:18:20.115 "num_base_bdevs": 3, 00:18:20.115 "num_base_bdevs_discovered": 1, 00:18:20.115 "num_base_bdevs_operational": 3, 00:18:20.115 "base_bdevs_list": [ 00:18:20.115 { 00:18:20.115 "name": "BaseBdev1", 00:18:20.115 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:20.115 "is_configured": true, 00:18:20.115 "data_offset": 2048, 00:18:20.115 "data_size": 63488 00:18:20.115 }, 00:18:20.115 { 00:18:20.115 "name": null, 00:18:20.115 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:20.115 "is_configured": false, 00:18:20.115 "data_offset": 0, 00:18:20.115 "data_size": 63488 00:18:20.115 }, 00:18:20.115 { 00:18:20.115 "name": null, 00:18:20.115 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:20.115 "is_configured": false, 00:18:20.115 "data_offset": 0, 00:18:20.115 "data_size": 63488 00:18:20.115 } 00:18:20.115 ] 00:18:20.115 }' 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.115 22:35:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.375 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.375 [2024-09-27 22:35:16.252106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.634 22:35:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.634 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.634 "name": "Existed_Raid", 00:18:20.634 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:20.634 "strip_size_kb": 64, 00:18:20.634 "state": "configuring", 00:18:20.634 "raid_level": "raid5f", 00:18:20.634 "superblock": true, 00:18:20.634 "num_base_bdevs": 3, 00:18:20.634 "num_base_bdevs_discovered": 2, 00:18:20.634 "num_base_bdevs_operational": 3, 00:18:20.634 "base_bdevs_list": [ 00:18:20.634 { 00:18:20.634 "name": "BaseBdev1", 00:18:20.634 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:20.634 "is_configured": true, 00:18:20.635 "data_offset": 2048, 00:18:20.635 "data_size": 63488 00:18:20.635 }, 00:18:20.635 { 00:18:20.635 "name": null, 00:18:20.635 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:20.635 "is_configured": false, 00:18:20.635 "data_offset": 0, 00:18:20.635 "data_size": 63488 00:18:20.635 }, 00:18:20.635 { 00:18:20.635 "name": "BaseBdev3", 00:18:20.635 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:20.635 
"is_configured": true, 00:18:20.635 "data_offset": 2048, 00:18:20.635 "data_size": 63488 00:18:20.635 } 00:18:20.635 ] 00:18:20.635 }' 00:18:20.635 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.635 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.894 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.894 [2024-09-27 22:35:16.728177] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.153 "name": "Existed_Raid", 00:18:21.153 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:21.153 "strip_size_kb": 64, 00:18:21.153 "state": "configuring", 00:18:21.153 "raid_level": "raid5f", 00:18:21.153 "superblock": true, 00:18:21.153 "num_base_bdevs": 3, 00:18:21.153 "num_base_bdevs_discovered": 1, 00:18:21.153 "num_base_bdevs_operational": 3, 00:18:21.153 "base_bdevs_list": [ 00:18:21.153 { 00:18:21.153 "name": null, 00:18:21.153 
"uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:21.153 "is_configured": false, 00:18:21.153 "data_offset": 0, 00:18:21.153 "data_size": 63488 00:18:21.153 }, 00:18:21.153 { 00:18:21.153 "name": null, 00:18:21.153 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:21.153 "is_configured": false, 00:18:21.153 "data_offset": 0, 00:18:21.153 "data_size": 63488 00:18:21.153 }, 00:18:21.153 { 00:18:21.153 "name": "BaseBdev3", 00:18:21.153 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:21.153 "is_configured": true, 00:18:21.153 "data_offset": 2048, 00:18:21.153 "data_size": 63488 00:18:21.153 } 00:18:21.153 ] 00:18:21.153 }' 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.153 22:35:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.412 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.412 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.412 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.412 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.672 [2024-09-27 22:35:17.334165] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.672 "name": "Existed_Raid", 00:18:21.672 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:21.672 "strip_size_kb": 64, 00:18:21.672 "state": "configuring", 00:18:21.672 "raid_level": "raid5f", 00:18:21.672 "superblock": true, 00:18:21.672 "num_base_bdevs": 3, 00:18:21.672 "num_base_bdevs_discovered": 2, 00:18:21.672 "num_base_bdevs_operational": 3, 00:18:21.672 "base_bdevs_list": [ 00:18:21.672 { 00:18:21.672 "name": null, 00:18:21.672 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:21.672 "is_configured": false, 00:18:21.672 "data_offset": 0, 00:18:21.672 "data_size": 63488 00:18:21.672 }, 00:18:21.672 { 00:18:21.672 "name": "BaseBdev2", 00:18:21.672 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:21.672 "is_configured": true, 00:18:21.672 "data_offset": 2048, 00:18:21.672 "data_size": 63488 00:18:21.672 }, 00:18:21.672 { 00:18:21.672 "name": "BaseBdev3", 00:18:21.672 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:21.672 "is_configured": true, 00:18:21.672 "data_offset": 2048, 00:18:21.672 "data_size": 63488 00:18:21.672 } 00:18:21.672 ] 00:18:21.672 }' 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.672 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.931 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.931 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:21.931 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.931 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6b98e34-cac3-4363-9c4f-5bae77c858a6 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.190 [2024-09-27 22:35:17.931363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:22.190 [2024-09-27 22:35:17.931613] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:22.190 [2024-09-27 22:35:17.931633] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:22.190 [2024-09-27 22:35:17.931955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:22.190 NewBaseBdev 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.190 [2024-09-27 22:35:17.938834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:22.190 [2024-09-27 22:35:17.938865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:22.190 [2024-09-27 22:35:17.939099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.190 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.191 [ 00:18:22.191 { 00:18:22.191 "name": "NewBaseBdev", 00:18:22.191 "aliases": [ 00:18:22.191 "c6b98e34-cac3-4363-9c4f-5bae77c858a6" 00:18:22.191 ], 00:18:22.191 "product_name": "Malloc disk", 00:18:22.191 "block_size": 512, 
00:18:22.191 "num_blocks": 65536, 00:18:22.191 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:22.191 "assigned_rate_limits": { 00:18:22.191 "rw_ios_per_sec": 0, 00:18:22.191 "rw_mbytes_per_sec": 0, 00:18:22.191 "r_mbytes_per_sec": 0, 00:18:22.191 "w_mbytes_per_sec": 0 00:18:22.191 }, 00:18:22.191 "claimed": true, 00:18:22.191 "claim_type": "exclusive_write", 00:18:22.191 "zoned": false, 00:18:22.191 "supported_io_types": { 00:18:22.191 "read": true, 00:18:22.191 "write": true, 00:18:22.191 "unmap": true, 00:18:22.191 "flush": true, 00:18:22.191 "reset": true, 00:18:22.191 "nvme_admin": false, 00:18:22.191 "nvme_io": false, 00:18:22.191 "nvme_io_md": false, 00:18:22.191 "write_zeroes": true, 00:18:22.191 "zcopy": true, 00:18:22.191 "get_zone_info": false, 00:18:22.191 "zone_management": false, 00:18:22.191 "zone_append": false, 00:18:22.191 "compare": false, 00:18:22.191 "compare_and_write": false, 00:18:22.191 "abort": true, 00:18:22.191 "seek_hole": false, 00:18:22.191 "seek_data": false, 00:18:22.191 "copy": true, 00:18:22.191 "nvme_iov_md": false 00:18:22.191 }, 00:18:22.191 "memory_domains": [ 00:18:22.191 { 00:18:22.191 "dma_device_id": "system", 00:18:22.191 "dma_device_type": 1 00:18:22.191 }, 00:18:22.191 { 00:18:22.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.191 "dma_device_type": 2 00:18:22.191 } 00:18:22.191 ], 00:18:22.191 "driver_specific": {} 00:18:22.191 } 00:18:22.191 ] 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.191 22:35:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.191 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.191 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.191 "name": "Existed_Raid", 00:18:22.191 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:22.191 "strip_size_kb": 64, 00:18:22.191 "state": "online", 00:18:22.191 "raid_level": "raid5f", 00:18:22.191 "superblock": true, 00:18:22.191 "num_base_bdevs": 3, 00:18:22.191 "num_base_bdevs_discovered": 3, 00:18:22.191 "num_base_bdevs_operational": 3, 00:18:22.191 "base_bdevs_list": [ 00:18:22.191 { 00:18:22.191 "name": 
"NewBaseBdev", 00:18:22.191 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:22.191 "is_configured": true, 00:18:22.191 "data_offset": 2048, 00:18:22.191 "data_size": 63488 00:18:22.191 }, 00:18:22.191 { 00:18:22.191 "name": "BaseBdev2", 00:18:22.191 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:22.191 "is_configured": true, 00:18:22.191 "data_offset": 2048, 00:18:22.191 "data_size": 63488 00:18:22.191 }, 00:18:22.191 { 00:18:22.191 "name": "BaseBdev3", 00:18:22.191 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:22.191 "is_configured": true, 00:18:22.191 "data_offset": 2048, 00:18:22.191 "data_size": 63488 00:18:22.191 } 00:18:22.191 ] 00:18:22.191 }' 00:18:22.191 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.191 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.758 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.758 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:22.758 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.759 22:35:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.759 [2024-09-27 22:35:18.445268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.759 "name": "Existed_Raid", 00:18:22.759 "aliases": [ 00:18:22.759 "6deb723c-47f8-48aa-a474-484d74a50613" 00:18:22.759 ], 00:18:22.759 "product_name": "Raid Volume", 00:18:22.759 "block_size": 512, 00:18:22.759 "num_blocks": 126976, 00:18:22.759 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:22.759 "assigned_rate_limits": { 00:18:22.759 "rw_ios_per_sec": 0, 00:18:22.759 "rw_mbytes_per_sec": 0, 00:18:22.759 "r_mbytes_per_sec": 0, 00:18:22.759 "w_mbytes_per_sec": 0 00:18:22.759 }, 00:18:22.759 "claimed": false, 00:18:22.759 "zoned": false, 00:18:22.759 "supported_io_types": { 00:18:22.759 "read": true, 00:18:22.759 "write": true, 00:18:22.759 "unmap": false, 00:18:22.759 "flush": false, 00:18:22.759 "reset": true, 00:18:22.759 "nvme_admin": false, 00:18:22.759 "nvme_io": false, 00:18:22.759 "nvme_io_md": false, 00:18:22.759 "write_zeroes": true, 00:18:22.759 "zcopy": false, 00:18:22.759 "get_zone_info": false, 00:18:22.759 "zone_management": false, 00:18:22.759 "zone_append": false, 00:18:22.759 "compare": false, 00:18:22.759 "compare_and_write": false, 00:18:22.759 "abort": false, 00:18:22.759 "seek_hole": false, 00:18:22.759 "seek_data": false, 00:18:22.759 "copy": false, 00:18:22.759 "nvme_iov_md": false 00:18:22.759 }, 00:18:22.759 "driver_specific": { 00:18:22.759 "raid": { 00:18:22.759 "uuid": "6deb723c-47f8-48aa-a474-484d74a50613", 00:18:22.759 "strip_size_kb": 64, 00:18:22.759 "state": "online", 00:18:22.759 "raid_level": "raid5f", 00:18:22.759 "superblock": true, 00:18:22.759 "num_base_bdevs": 3, 00:18:22.759 
"num_base_bdevs_discovered": 3, 00:18:22.759 "num_base_bdevs_operational": 3, 00:18:22.759 "base_bdevs_list": [ 00:18:22.759 { 00:18:22.759 "name": "NewBaseBdev", 00:18:22.759 "uuid": "c6b98e34-cac3-4363-9c4f-5bae77c858a6", 00:18:22.759 "is_configured": true, 00:18:22.759 "data_offset": 2048, 00:18:22.759 "data_size": 63488 00:18:22.759 }, 00:18:22.759 { 00:18:22.759 "name": "BaseBdev2", 00:18:22.759 "uuid": "a97bdc12-d676-4c94-832b-83dc1c2340d4", 00:18:22.759 "is_configured": true, 00:18:22.759 "data_offset": 2048, 00:18:22.759 "data_size": 63488 00:18:22.759 }, 00:18:22.759 { 00:18:22.759 "name": "BaseBdev3", 00:18:22.759 "uuid": "6202d9ab-383e-4a90-ad0c-d7bdd2e6b49b", 00:18:22.759 "is_configured": true, 00:18:22.759 "data_offset": 2048, 00:18:22.759 "data_size": 63488 00:18:22.759 } 00:18:22.759 ] 00:18:22.759 } 00:18:22.759 } 00:18:22.759 }' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:22.759 BaseBdev2 00:18:22.759 BaseBdev3' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.759 22:35:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.759 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.017 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.017 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.018 22:35:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.018 [2024-09-27 22:35:18.728569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.018 [2024-09-27 22:35:18.728786] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.018 [2024-09-27 22:35:18.728915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.018 [2024-09-27 22:35:18.729271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.018 [2024-09-27 22:35:18.729292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81562 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81562 ']' 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81562 00:18:23.018 22:35:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81562 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81562' 00:18:23.018 killing process with pid 81562 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81562 00:18:23.018 [2024-09-27 22:35:18.785745] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.018 22:35:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81562 00:18:23.277 [2024-09-27 22:35:19.114006] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.810 22:35:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:25.810 ************************************ 00:18:25.810 END TEST raid5f_state_function_test_sb 00:18:25.810 ************************************ 00:18:25.810 00:18:25.810 real 0m12.192s 00:18:25.810 user 0m18.287s 00:18:25.810 sys 0m2.413s 00:18:25.810 22:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:25.810 22:35:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.810 22:35:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:18:25.810 22:35:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:25.810 
22:35:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:25.810 22:35:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.810 ************************************ 00:18:25.810 START TEST raid5f_superblock_test 00:18:25.810 ************************************ 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82210 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82210 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 82210 ']' 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.810 22:35:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.810 [2024-09-27 22:35:21.424208] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:18:25.810 [2024-09-27 22:35:21.424350] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82210 ] 00:18:25.810 [2024-09-27 22:35:21.596563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.069 [2024-09-27 22:35:21.826617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.327 [2024-09-27 22:35:22.062936] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.327 [2024-09-27 22:35:22.062995] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.895 malloc1 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.895 [2024-09-27 22:35:22.580303] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:26.895 [2024-09-27 22:35:22.580397] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.895 [2024-09-27 22:35:22.580433] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.895 [2024-09-27 22:35:22.580454] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.895 [2024-09-27 22:35:22.583642] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.895 [2024-09-27 22:35:22.583697] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:26.895 pt1 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.895 malloc2 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.895 [2024-09-27 22:35:22.639018] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.895 [2024-09-27 22:35:22.639090] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.895 [2024-09-27 22:35:22.639131] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.895 [2024-09-27 22:35:22.639147] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.895 [2024-09-27 22:35:22.642191] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.895 [2024-09-27 22:35:22.642245] vbdev_passthru.c: 
791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.895 pt2 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:26.895 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.896 malloc3 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.896 [2024-09-27 22:35:22.696719] 
vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:26.896 [2024-09-27 22:35:22.696781] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.896 [2024-09-27 22:35:22.696808] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:26.896 [2024-09-27 22:35:22.696821] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.896 [2024-09-27 22:35:22.699254] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.896 [2024-09-27 22:35:22.699290] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:26.896 pt3 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.896 [2024-09-27 22:35:22.708795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:26.896 [2024-09-27 22:35:22.710998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.896 [2024-09-27 22:35:22.711068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:26.896 [2024-09-27 22:35:22.711247] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.896 [2024-09-27 22:35:22.711263] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:18:26.896 [2024-09-27 22:35:22.711525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:26.896 [2024-09-27 22:35:22.718619] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.896 [2024-09-27 22:35:22.718643] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:26.896 [2024-09-27 22:35:22.718854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.896 
22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.896 "name": "raid_bdev1", 00:18:26.896 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:26.896 "strip_size_kb": 64, 00:18:26.896 "state": "online", 00:18:26.896 "raid_level": "raid5f", 00:18:26.896 "superblock": true, 00:18:26.896 "num_base_bdevs": 3, 00:18:26.896 "num_base_bdevs_discovered": 3, 00:18:26.896 "num_base_bdevs_operational": 3, 00:18:26.896 "base_bdevs_list": [ 00:18:26.896 { 00:18:26.896 "name": "pt1", 00:18:26.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.896 "is_configured": true, 00:18:26.896 "data_offset": 2048, 00:18:26.896 "data_size": 63488 00:18:26.896 }, 00:18:26.896 { 00:18:26.896 "name": "pt2", 00:18:26.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.896 "is_configured": true, 00:18:26.896 "data_offset": 2048, 00:18:26.896 "data_size": 63488 00:18:26.896 }, 00:18:26.896 { 00:18:26.896 "name": "pt3", 00:18:26.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:26.896 "is_configured": true, 00:18:26.896 "data_offset": 2048, 00:18:26.896 "data_size": 63488 00:18:26.896 } 00:18:26.896 ] 00:18:26.896 }' 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.896 22:35:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:27.463 22:35:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.463 [2024-09-27 22:35:23.152391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.463 "name": "raid_bdev1", 00:18:27.463 "aliases": [ 00:18:27.463 "86487e13-147d-4ae4-9c64-689c5e8cfc86" 00:18:27.463 ], 00:18:27.463 "product_name": "Raid Volume", 00:18:27.463 "block_size": 512, 00:18:27.463 "num_blocks": 126976, 00:18:27.463 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:27.463 "assigned_rate_limits": { 00:18:27.463 "rw_ios_per_sec": 0, 00:18:27.463 "rw_mbytes_per_sec": 0, 00:18:27.463 "r_mbytes_per_sec": 0, 00:18:27.463 "w_mbytes_per_sec": 0 00:18:27.463 }, 00:18:27.463 "claimed": false, 00:18:27.463 "zoned": false, 00:18:27.463 "supported_io_types": { 00:18:27.463 "read": true, 00:18:27.463 "write": true, 00:18:27.463 "unmap": false, 00:18:27.463 "flush": false, 00:18:27.463 "reset": true, 00:18:27.463 "nvme_admin": false, 00:18:27.463 "nvme_io": false, 00:18:27.463 "nvme_io_md": false, 
00:18:27.463 "write_zeroes": true, 00:18:27.463 "zcopy": false, 00:18:27.463 "get_zone_info": false, 00:18:27.463 "zone_management": false, 00:18:27.463 "zone_append": false, 00:18:27.463 "compare": false, 00:18:27.463 "compare_and_write": false, 00:18:27.463 "abort": false, 00:18:27.463 "seek_hole": false, 00:18:27.463 "seek_data": false, 00:18:27.463 "copy": false, 00:18:27.463 "nvme_iov_md": false 00:18:27.463 }, 00:18:27.463 "driver_specific": { 00:18:27.463 "raid": { 00:18:27.463 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:27.463 "strip_size_kb": 64, 00:18:27.463 "state": "online", 00:18:27.463 "raid_level": "raid5f", 00:18:27.463 "superblock": true, 00:18:27.463 "num_base_bdevs": 3, 00:18:27.463 "num_base_bdevs_discovered": 3, 00:18:27.463 "num_base_bdevs_operational": 3, 00:18:27.463 "base_bdevs_list": [ 00:18:27.463 { 00:18:27.463 "name": "pt1", 00:18:27.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.463 "is_configured": true, 00:18:27.463 "data_offset": 2048, 00:18:27.463 "data_size": 63488 00:18:27.463 }, 00:18:27.463 { 00:18:27.463 "name": "pt2", 00:18:27.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.463 "is_configured": true, 00:18:27.463 "data_offset": 2048, 00:18:27.463 "data_size": 63488 00:18:27.463 }, 00:18:27.463 { 00:18:27.463 "name": "pt3", 00:18:27.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.463 "is_configured": true, 00:18:27.463 "data_offset": 2048, 00:18:27.463 "data_size": 63488 00:18:27.463 } 00:18:27.463 ] 00:18:27.463 } 00:18:27.463 } 00:18:27.463 }' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:27.463 pt2 00:18:27.463 pt3' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.463 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.464 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.464 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.464 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:27.464 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.464 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.723 
22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 [2024-09-27 22:35:23.408261] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=86487e13-147d-4ae4-9c64-689c5e8cfc86 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 86487e13-147d-4ae4-9c64-689c5e8cfc86 ']' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.723 22:35:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 [2024-09-27 22:35:23.444052] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.723 [2024-09-27 22:35:23.444086] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.723 [2024-09-27 22:35:23.444180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.723 [2024-09-27 22:35:23.444254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.723 [2024-09-27 22:35:23.444266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.723 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.723 [2024-09-27 22:35:23.580130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:27.723 [2024-09-27 22:35:23.582353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:27.723 [2024-09-27 22:35:23.582411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:27.723 [2024-09-27 22:35:23.582464] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:27.723 [2024-09-27 22:35:23.582518] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:27.723 [2024-09-27 22:35:23.582540] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:27.723 [2024-09-27 22:35:23.582561] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.723 [2024-09-27 22:35:23.582574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:27.724 request: 00:18:27.724 { 00:18:27.724 "name": "raid_bdev1", 00:18:27.724 "raid_level": "raid5f", 00:18:27.724 "base_bdevs": [ 00:18:27.724 "malloc1", 00:18:27.724 "malloc2", 00:18:27.724 "malloc3" 00:18:27.724 ], 00:18:27.724 "strip_size_kb": 64, 00:18:27.724 "superblock": false, 00:18:27.724 "method": "bdev_raid_create", 00:18:27.724 "req_id": 1 00:18:27.724 } 00:18:27.724 Got JSON-RPC error response 00:18:27.724 response: 00:18:27.724 { 00:18:27.724 "code": -17, 00:18:27.724 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:27.724 } 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:27.724 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.983 [2024-09-27 22:35:23.652085] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:27.983 [2024-09-27 22:35:23.652167] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.983 [2024-09-27 22:35:23.652193] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:27.983 [2024-09-27 22:35:23.652205] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.983 [2024-09-27 22:35:23.654708] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.983 [2024-09-27 22:35:23.654749] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:27.983 [2024-09-27 22:35:23.654841] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:27.983 [2024-09-27 22:35:23.654905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.983 pt1 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.983 "name": "raid_bdev1", 00:18:27.983 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:27.983 "strip_size_kb": 64, 00:18:27.983 "state": "configuring", 00:18:27.983 "raid_level": "raid5f", 00:18:27.983 "superblock": true, 00:18:27.983 "num_base_bdevs": 3, 00:18:27.983 "num_base_bdevs_discovered": 1, 00:18:27.983 
"num_base_bdevs_operational": 3, 00:18:27.983 "base_bdevs_list": [ 00:18:27.983 { 00:18:27.983 "name": "pt1", 00:18:27.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.983 "is_configured": true, 00:18:27.983 "data_offset": 2048, 00:18:27.983 "data_size": 63488 00:18:27.983 }, 00:18:27.983 { 00:18:27.983 "name": null, 00:18:27.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.983 "is_configured": false, 00:18:27.983 "data_offset": 2048, 00:18:27.983 "data_size": 63488 00:18:27.983 }, 00:18:27.983 { 00:18:27.983 "name": null, 00:18:27.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.983 "is_configured": false, 00:18:27.983 "data_offset": 2048, 00:18:27.983 "data_size": 63488 00:18:27.983 } 00:18:27.983 ] 00:18:27.983 }' 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.983 22:35:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.243 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:28.243 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.243 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.243 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.243 [2024-09-27 22:35:24.100113] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.243 [2024-09-27 22:35:24.100191] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.243 [2024-09-27 22:35:24.100218] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:28.244 [2024-09-27 22:35:24.100231] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.244 [2024-09-27 22:35:24.100712] vbdev_passthru.c: 
790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.244 [2024-09-27 22:35:24.100741] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.244 [2024-09-27 22:35:24.100833] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:28.244 [2024-09-27 22:35:24.100856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.244 pt2 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.244 [2024-09-27 22:35:24.112098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.244 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.514 "name": "raid_bdev1", 00:18:28.514 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:28.514 "strip_size_kb": 64, 00:18:28.514 "state": "configuring", 00:18:28.514 "raid_level": "raid5f", 00:18:28.514 "superblock": true, 00:18:28.514 "num_base_bdevs": 3, 00:18:28.514 "num_base_bdevs_discovered": 1, 00:18:28.514 "num_base_bdevs_operational": 3, 00:18:28.514 "base_bdevs_list": [ 00:18:28.514 { 00:18:28.514 "name": "pt1", 00:18:28.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.514 "is_configured": true, 00:18:28.514 "data_offset": 2048, 00:18:28.514 "data_size": 63488 00:18:28.514 }, 00:18:28.514 { 00:18:28.514 "name": null, 00:18:28.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.514 "is_configured": false, 00:18:28.514 "data_offset": 0, 00:18:28.514 "data_size": 63488 00:18:28.514 }, 00:18:28.514 { 00:18:28.514 "name": null, 00:18:28.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.514 "is_configured": false, 00:18:28.514 "data_offset": 2048, 00:18:28.514 "data_size": 63488 00:18:28.514 } 00:18:28.514 ] 00:18:28.514 }' 00:18:28.514 22:35:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.514 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.773 [2024-09-27 22:35:24.552072] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.773 [2024-09-27 22:35:24.552155] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.773 [2024-09-27 22:35:24.552178] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:28.773 [2024-09-27 22:35:24.552209] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.773 [2024-09-27 22:35:24.552677] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.773 [2024-09-27 22:35:24.552715] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.773 [2024-09-27 22:35:24.552800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:28.773 [2024-09-27 22:35:24.552832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.773 pt2 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:28.773 22:35:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.773 [2024-09-27 22:35:24.560112] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:28.773 [2024-09-27 22:35:24.560164] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.773 [2024-09-27 22:35:24.560182] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:28.773 [2024-09-27 22:35:24.560202] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.773 [2024-09-27 22:35:24.560604] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.773 [2024-09-27 22:35:24.560644] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:28.773 [2024-09-27 22:35:24.560708] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:28.773 [2024-09-27 22:35:24.560738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:28.773 [2024-09-27 22:35:24.560870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:28.773 [2024-09-27 22:35:24.560894] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:28.773 [2024-09-27 22:35:24.561164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:28.773 [2024-09-27 22:35:24.567165] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:28.773 [2024-09-27 22:35:24.567191] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:28.773 pt3 00:18:28.773 [2024-09-27 22:35:24.567361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.773 "name": "raid_bdev1", 00:18:28.773 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:28.773 "strip_size_kb": 64, 00:18:28.773 "state": "online", 00:18:28.773 "raid_level": "raid5f", 00:18:28.773 "superblock": true, 00:18:28.773 "num_base_bdevs": 3, 00:18:28.773 "num_base_bdevs_discovered": 3, 00:18:28.773 "num_base_bdevs_operational": 3, 00:18:28.773 "base_bdevs_list": [ 00:18:28.773 { 00:18:28.773 "name": "pt1", 00:18:28.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.773 "is_configured": true, 00:18:28.773 "data_offset": 2048, 00:18:28.773 "data_size": 63488 00:18:28.773 }, 00:18:28.773 { 00:18:28.773 "name": "pt2", 00:18:28.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.773 "is_configured": true, 00:18:28.773 "data_offset": 2048, 00:18:28.773 "data_size": 63488 00:18:28.773 }, 00:18:28.773 { 00:18:28.773 "name": "pt3", 00:18:28.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.773 "is_configured": true, 00:18:28.773 "data_offset": 2048, 00:18:28.773 "data_size": 63488 00:18:28.773 } 00:18:28.773 ] 00:18:28.773 }' 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.773 22:35:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:29.346 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:29.346 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.346 
22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.346 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.346 22:35:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 [2024-09-27 22:35:25.009028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.346 "name": "raid_bdev1", 00:18:29.346 "aliases": [ 00:18:29.346 "86487e13-147d-4ae4-9c64-689c5e8cfc86" 00:18:29.346 ], 00:18:29.346 "product_name": "Raid Volume", 00:18:29.346 "block_size": 512, 00:18:29.346 "num_blocks": 126976, 00:18:29.346 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:29.346 "assigned_rate_limits": { 00:18:29.346 "rw_ios_per_sec": 0, 00:18:29.346 "rw_mbytes_per_sec": 0, 00:18:29.346 "r_mbytes_per_sec": 0, 00:18:29.346 "w_mbytes_per_sec": 0 00:18:29.346 }, 00:18:29.346 "claimed": false, 00:18:29.346 "zoned": false, 00:18:29.346 "supported_io_types": { 00:18:29.346 "read": true, 00:18:29.346 "write": true, 00:18:29.346 "unmap": false, 00:18:29.346 "flush": false, 00:18:29.346 "reset": true, 00:18:29.346 "nvme_admin": false, 00:18:29.346 "nvme_io": false, 00:18:29.346 "nvme_io_md": false, 00:18:29.346 "write_zeroes": true, 00:18:29.346 "zcopy": false, 00:18:29.346 "get_zone_info": false, 
00:18:29.346 "zone_management": false, 00:18:29.346 "zone_append": false, 00:18:29.346 "compare": false, 00:18:29.346 "compare_and_write": false, 00:18:29.346 "abort": false, 00:18:29.346 "seek_hole": false, 00:18:29.346 "seek_data": false, 00:18:29.346 "copy": false, 00:18:29.346 "nvme_iov_md": false 00:18:29.346 }, 00:18:29.346 "driver_specific": { 00:18:29.346 "raid": { 00:18:29.346 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:29.346 "strip_size_kb": 64, 00:18:29.346 "state": "online", 00:18:29.346 "raid_level": "raid5f", 00:18:29.346 "superblock": true, 00:18:29.346 "num_base_bdevs": 3, 00:18:29.346 "num_base_bdevs_discovered": 3, 00:18:29.346 "num_base_bdevs_operational": 3, 00:18:29.346 "base_bdevs_list": [ 00:18:29.346 { 00:18:29.346 "name": "pt1", 00:18:29.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.346 "is_configured": true, 00:18:29.346 "data_offset": 2048, 00:18:29.346 "data_size": 63488 00:18:29.346 }, 00:18:29.346 { 00:18:29.346 "name": "pt2", 00:18:29.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.346 "is_configured": true, 00:18:29.346 "data_offset": 2048, 00:18:29.346 "data_size": 63488 00:18:29.346 }, 00:18:29.346 { 00:18:29.346 "name": "pt3", 00:18:29.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.346 "is_configured": true, 00:18:29.346 "data_offset": 2048, 00:18:29.346 "data_size": 63488 00:18:29.346 } 00:18:29.346 ] 00:18:29.346 } 00:18:29.346 } 00:18:29.346 }' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:29.346 pt2 00:18:29.346 pt3' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.346 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:29.605 [2024-09-27 22:35:25.228647] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 86487e13-147d-4ae4-9c64-689c5e8cfc86 '!=' 86487e13-147d-4ae4-9c64-689c5e8cfc86 ']' 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:29.605 22:35:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.605 [2024-09-27 22:35:25.276430] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.605 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.605 "name": "raid_bdev1", 00:18:29.605 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:29.605 "strip_size_kb": 64, 00:18:29.605 "state": "online", 00:18:29.605 "raid_level": "raid5f", 00:18:29.606 "superblock": true, 00:18:29.606 "num_base_bdevs": 3, 00:18:29.606 "num_base_bdevs_discovered": 2, 00:18:29.606 "num_base_bdevs_operational": 2, 00:18:29.606 "base_bdevs_list": [ 00:18:29.606 { 00:18:29.606 "name": null, 00:18:29.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.606 "is_configured": false, 00:18:29.606 "data_offset": 0, 00:18:29.606 "data_size": 63488 00:18:29.606 }, 00:18:29.606 { 00:18:29.606 "name": "pt2", 00:18:29.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.606 "is_configured": true, 00:18:29.606 "data_offset": 2048, 00:18:29.606 "data_size": 63488 00:18:29.606 }, 00:18:29.606 { 00:18:29.606 "name": "pt3", 00:18:29.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.606 "is_configured": true, 00:18:29.606 "data_offset": 2048, 00:18:29.606 "data_size": 63488 00:18:29.606 } 00:18:29.606 ] 00:18:29.606 }' 00:18:29.606 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.606 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.867 [2024-09-27 22:35:25.728099] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:18:29.867 [2024-09-27 22:35:25.728152] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.867 [2024-09-27 22:35:25.728250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.867 [2024-09-27 22:35:25.728310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.867 [2024-09-27 22:35:25.728329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.867 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 22:35:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.126 [2024-09-27 22:35:25.812068] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.126 [2024-09-27 22:35:25.812142] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.126 [2024-09-27 22:35:25.812164] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:30.126 [2024-09-27 22:35:25.812180] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:30.126 [2024-09-27 22:35:25.814881] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.126 [2024-09-27 22:35:25.814933] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.126 [2024-09-27 22:35:25.815042] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:30.126 [2024-09-27 22:35:25.815103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.126 pt2 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.126 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.126 22:35:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.127 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.127 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.127 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.127 "name": "raid_bdev1", 00:18:30.127 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:30.127 "strip_size_kb": 64, 00:18:30.127 "state": "configuring", 00:18:30.127 "raid_level": "raid5f", 00:18:30.127 "superblock": true, 00:18:30.127 "num_base_bdevs": 3, 00:18:30.127 "num_base_bdevs_discovered": 1, 00:18:30.127 "num_base_bdevs_operational": 2, 00:18:30.127 "base_bdevs_list": [ 00:18:30.127 { 00:18:30.127 "name": null, 00:18:30.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.127 "is_configured": false, 00:18:30.127 "data_offset": 2048, 00:18:30.127 "data_size": 63488 00:18:30.127 }, 00:18:30.127 { 00:18:30.127 "name": "pt2", 00:18:30.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.127 "is_configured": true, 00:18:30.127 "data_offset": 2048, 00:18:30.127 "data_size": 63488 00:18:30.127 }, 00:18:30.127 { 00:18:30.127 "name": null, 00:18:30.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.127 "is_configured": false, 00:18:30.127 "data_offset": 2048, 00:18:30.127 "data_size": 63488 00:18:30.127 } 00:18:30.127 ] 00:18:30.127 }' 00:18:30.127 22:35:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.127 22:35:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.385 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:30.385 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:30.385 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:18:30.385 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:30.385 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.385 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.385 [2024-09-27 22:35:26.252126] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:30.385 [2024-09-27 22:35:26.252201] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.385 [2024-09-27 22:35:26.252226] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:30.385 [2024-09-27 22:35:26.252242] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.385 [2024-09-27 22:35:26.252724] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.385 [2024-09-27 22:35:26.252758] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:30.385 [2024-09-27 22:35:26.252847] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:30.386 [2024-09-27 22:35:26.252887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:30.386 [2024-09-27 22:35:26.253021] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:30.386 [2024-09-27 22:35:26.253043] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:30.386 [2024-09-27 22:35:26.253291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:30.386 [2024-09-27 22:35:26.259352] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:30.386 [2024-09-27 22:35:26.259380] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:18:30.386 [2024-09-27 22:35:26.259720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.386 pt3 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.386 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.644 "name": "raid_bdev1", 00:18:30.644 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:30.644 "strip_size_kb": 64, 00:18:30.644 "state": "online", 00:18:30.644 "raid_level": "raid5f", 00:18:30.644 "superblock": true, 00:18:30.644 "num_base_bdevs": 3, 00:18:30.644 "num_base_bdevs_discovered": 2, 00:18:30.644 "num_base_bdevs_operational": 2, 00:18:30.644 "base_bdevs_list": [ 00:18:30.644 { 00:18:30.644 "name": null, 00:18:30.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.644 "is_configured": false, 00:18:30.644 "data_offset": 2048, 00:18:30.644 "data_size": 63488 00:18:30.644 }, 00:18:30.644 { 00:18:30.644 "name": "pt2", 00:18:30.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.644 "is_configured": true, 00:18:30.644 "data_offset": 2048, 00:18:30.644 "data_size": 63488 00:18:30.644 }, 00:18:30.644 { 00:18:30.644 "name": "pt3", 00:18:30.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.644 "is_configured": true, 00:18:30.644 "data_offset": 2048, 00:18:30.644 "data_size": 63488 00:18:30.644 } 00:18:30.644 ] 00:18:30.644 }' 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.644 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.903 [2024-09-27 22:35:26.686119] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.903 [2024-09-27 22:35:26.686163] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.903 [2024-09-27 22:35:26.686260] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:18:30.903 [2024-09-27 22:35:26.686324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.903 [2024-09-27 22:35:26.686337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.903 22:35:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.903 [2024-09-27 22:35:26.758131] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.903 [2024-09-27 22:35:26.758207] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.903 [2024-09-27 22:35:26.758234] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:30.903 [2024-09-27 22:35:26.758248] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.903 [2024-09-27 22:35:26.761232] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.903 [2024-09-27 22:35:26.761278] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.903 [2024-09-27 22:35:26.761378] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:30.903 [2024-09-27 22:35:26.761438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.903 [2024-09-27 22:35:26.761581] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:30.903 [2024-09-27 22:35:26.761599] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.903 [2024-09-27 22:35:26.761619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:30.903 [2024-09-27 22:35:26.761693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.903 pt1 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:30.903 22:35:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.903 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.162 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.163 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.163 "name": "raid_bdev1", 00:18:31.163 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:31.163 "strip_size_kb": 64, 00:18:31.163 "state": "configuring", 00:18:31.163 "raid_level": "raid5f", 00:18:31.163 
"superblock": true, 00:18:31.163 "num_base_bdevs": 3, 00:18:31.163 "num_base_bdevs_discovered": 1, 00:18:31.163 "num_base_bdevs_operational": 2, 00:18:31.163 "base_bdevs_list": [ 00:18:31.163 { 00:18:31.163 "name": null, 00:18:31.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.163 "is_configured": false, 00:18:31.163 "data_offset": 2048, 00:18:31.163 "data_size": 63488 00:18:31.163 }, 00:18:31.163 { 00:18:31.163 "name": "pt2", 00:18:31.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.163 "is_configured": true, 00:18:31.163 "data_offset": 2048, 00:18:31.163 "data_size": 63488 00:18:31.163 }, 00:18:31.163 { 00:18:31.163 "name": null, 00:18:31.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.163 "is_configured": false, 00:18:31.163 "data_offset": 2048, 00:18:31.163 "data_size": 63488 00:18:31.163 } 00:18:31.163 ] 00:18:31.163 }' 00:18:31.163 22:35:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.163 22:35:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.423 [2024-09-27 22:35:27.261413] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.423 [2024-09-27 22:35:27.261496] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.423 [2024-09-27 22:35:27.261526] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:31.423 [2024-09-27 22:35:27.261541] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.423 [2024-09-27 22:35:27.262099] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.423 [2024-09-27 22:35:27.262131] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.423 [2024-09-27 22:35:27.262233] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:31.423 [2024-09-27 22:35:27.262260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.423 [2024-09-27 22:35:27.262408] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:31.423 [2024-09-27 22:35:27.262428] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:31.423 [2024-09-27 22:35:27.262773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:31.423 [2024-09-27 22:35:27.270727] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:31.423 [2024-09-27 22:35:27.270765] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:31.423 [2024-09-27 22:35:27.271057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.423 pt3 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.423 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.424 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.424 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.424 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.746 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.747 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.747 "name": "raid_bdev1", 00:18:31.747 "uuid": "86487e13-147d-4ae4-9c64-689c5e8cfc86", 00:18:31.747 "strip_size_kb": 64, 00:18:31.747 "state": "online", 00:18:31.747 "raid_level": 
"raid5f", 00:18:31.747 "superblock": true, 00:18:31.747 "num_base_bdevs": 3, 00:18:31.747 "num_base_bdevs_discovered": 2, 00:18:31.747 "num_base_bdevs_operational": 2, 00:18:31.747 "base_bdevs_list": [ 00:18:31.747 { 00:18:31.747 "name": null, 00:18:31.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.747 "is_configured": false, 00:18:31.747 "data_offset": 2048, 00:18:31.747 "data_size": 63488 00:18:31.747 }, 00:18:31.747 { 00:18:31.747 "name": "pt2", 00:18:31.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.747 "is_configured": true, 00:18:31.747 "data_offset": 2048, 00:18:31.747 "data_size": 63488 00:18:31.747 }, 00:18:31.747 { 00:18:31.747 "name": "pt3", 00:18:31.747 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.747 "is_configured": true, 00:18:31.747 "data_offset": 2048, 00:18:31.747 "data_size": 63488 00:18:31.747 } 00:18:31.747 ] 00:18:31.747 }' 00:18:31.747 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.747 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.006 [2024-09-27 22:35:27.757562] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 86487e13-147d-4ae4-9c64-689c5e8cfc86 '!=' 86487e13-147d-4ae4-9c64-689c5e8cfc86 ']' 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82210 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 82210 ']' 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 82210 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82210 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.006 killing process with pid 82210 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82210' 00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 82210 00:18:32.006 [2024-09-27 22:35:27.833627] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.006 [2024-09-27 22:35:27.833728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:32.006 22:35:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 82210 00:18:32.007 [2024-09-27 22:35:27.833808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.007 [2024-09-27 22:35:27.833826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:32.575 [2024-09-27 22:35:28.152857] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.478 22:35:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:34.479 00:18:34.479 real 0m8.925s 00:18:34.479 user 0m13.077s 00:18:34.479 sys 0m1.788s 00:18:34.479 22:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.479 22:35:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.479 ************************************ 00:18:34.479 END TEST raid5f_superblock_test 00:18:34.479 ************************************ 00:18:34.479 22:35:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:34.479 22:35:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:34.479 22:35:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:34.479 22:35:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.479 22:35:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.479 ************************************ 00:18:34.479 START TEST raid5f_rebuild_test 00:18:34.479 ************************************ 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:34.479 22:35:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82666 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82666 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 82666 ']' 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.479 22:35:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.738 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:34.738 Zero copy mechanism will not be used. 00:18:34.738 [2024-09-27 22:35:30.437851] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:18:34.738 [2024-09-27 22:35:30.438004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82666 ] 00:18:34.738 [2024-09-27 22:35:30.611251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.996 [2024-09-27 22:35:30.857512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.274 [2024-09-27 22:35:31.112468] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.274 [2024-09-27 22:35:31.112513] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.842 BaseBdev1_malloc 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.842 22:35:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.842 [2024-09-27 22:35:31.658644] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.842 [2024-09-27 22:35:31.658736] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.842 [2024-09-27 22:35:31.658761] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.842 [2024-09-27 22:35:31.658779] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.842 [2024-09-27 22:35:31.661350] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.842 [2024-09-27 22:35:31.661397] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.842 BaseBdev1 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.842 BaseBdev2_malloc 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.842 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.101 [2024-09-27 22:35:31.722950] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:18:36.101 [2024-09-27 22:35:31.723057] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.101 [2024-09-27 22:35:31.723086] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:36.101 [2024-09-27 22:35:31.723103] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.101 [2024-09-27 22:35:31.725649] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.101 [2024-09-27 22:35:31.725695] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:36.101 BaseBdev2 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.101 BaseBdev3_malloc 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.101 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.101 [2024-09-27 22:35:31.786987] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:36.101 [2024-09-27 22:35:31.787088] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.101 [2024-09-27 22:35:31.787112] vbdev_passthru.c: 762:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:18:36.101 [2024-09-27 22:35:31.787127] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.101 [2024-09-27 22:35:31.789703] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.102 [2024-09-27 22:35:31.789750] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:36.102 BaseBdev3 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 spare_malloc 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 spare_delay 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 [2024-09-27 22:35:31.862534] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.102 [2024-09-27 22:35:31.862611] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.102 [2024-09-27 22:35:31.862633] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:36.102 [2024-09-27 22:35:31.862648] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.102 [2024-09-27 22:35:31.865307] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.102 [2024-09-27 22:35:31.865361] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.102 spare 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 [2024-09-27 22:35:31.874624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.102 [2024-09-27 22:35:31.876909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.102 [2024-09-27 22:35:31.877001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:36.102 [2024-09-27 22:35:31.877114] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:36.102 [2024-09-27 22:35:31.877126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:36.102 [2024-09-27 22:35:31.877436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:36.102 [2024-09-27 22:35:31.884511] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:36.102 [2024-09-27 22:35:31.884544] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:36.102 [2024-09-27 22:35:31.884778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.102 22:35:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.102 "name": "raid_bdev1", 00:18:36.102 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:36.102 "strip_size_kb": 64, 00:18:36.102 "state": "online", 00:18:36.102 "raid_level": "raid5f", 00:18:36.102 "superblock": false, 00:18:36.102 "num_base_bdevs": 3, 00:18:36.102 "num_base_bdevs_discovered": 3, 00:18:36.102 "num_base_bdevs_operational": 3, 00:18:36.102 "base_bdevs_list": [ 00:18:36.102 { 00:18:36.102 "name": "BaseBdev1", 00:18:36.102 "uuid": "cd9956f2-bb91-5bad-aaec-bc020963a29b", 00:18:36.102 "is_configured": true, 00:18:36.102 "data_offset": 0, 00:18:36.102 "data_size": 65536 00:18:36.102 }, 00:18:36.102 { 00:18:36.102 "name": "BaseBdev2", 00:18:36.102 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:36.102 "is_configured": true, 00:18:36.102 "data_offset": 0, 00:18:36.102 "data_size": 65536 00:18:36.102 }, 00:18:36.102 { 00:18:36.102 "name": "BaseBdev3", 00:18:36.102 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:36.102 "is_configured": true, 00:18:36.102 "data_offset": 0, 00:18:36.102 "data_size": 65536 00:18:36.102 } 00:18:36.102 ] 00:18:36.102 }' 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.102 22:35:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.670 [2024-09-27 22:35:32.314740] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:18:36.670 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:36.929 [2024-09-27 22:35:32.634207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:36.929 /dev/nbd0 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.929 1+0 records in 00:18:36.929 1+0 records out 00:18:36.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406509 s, 10.1 MB/s 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:36.929 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:36.930 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:36.930 22:35:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:37.497 512+0 records in 00:18:37.497 512+0 records out 00:18:37.497 67108864 bytes (67 MB, 64 MiB) copied, 0.402413 s, 167 MB/s 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:37.497 
[2024-09-27 22:35:33.343809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.497 [2024-09-27 22:35:33.362774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.497 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.756 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.756 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.756 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.756 "name": "raid_bdev1", 00:18:37.756 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:37.756 "strip_size_kb": 64, 00:18:37.756 "state": "online", 00:18:37.756 "raid_level": "raid5f", 00:18:37.756 "superblock": false, 00:18:37.756 "num_base_bdevs": 3, 00:18:37.756 "num_base_bdevs_discovered": 2, 00:18:37.756 "num_base_bdevs_operational": 2, 00:18:37.756 "base_bdevs_list": [ 00:18:37.756 { 00:18:37.756 "name": null, 00:18:37.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.756 "is_configured": false, 00:18:37.756 "data_offset": 0, 00:18:37.756 "data_size": 65536 00:18:37.756 }, 00:18:37.756 { 00:18:37.756 "name": "BaseBdev2", 00:18:37.756 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:37.756 "is_configured": true, 00:18:37.756 "data_offset": 0, 00:18:37.756 "data_size": 65536 00:18:37.756 }, 00:18:37.756 { 00:18:37.756 "name": "BaseBdev3", 00:18:37.756 "uuid": 
"28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:37.756 "is_configured": true, 00:18:37.756 "data_offset": 0, 00:18:37.756 "data_size": 65536 00:18:37.756 } 00:18:37.756 ] 00:18:37.756 }' 00:18:37.756 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.756 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.015 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.015 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.015 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.015 [2024-09-27 22:35:33.778235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.015 [2024-09-27 22:35:33.799367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:38.015 22:35:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.015 22:35:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:38.015 [2024-09-27 22:35:33.809352] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.019 22:35:34 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.019 "name": "raid_bdev1", 00:18:39.019 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:39.019 "strip_size_kb": 64, 00:18:39.019 "state": "online", 00:18:39.019 "raid_level": "raid5f", 00:18:39.019 "superblock": false, 00:18:39.019 "num_base_bdevs": 3, 00:18:39.019 "num_base_bdevs_discovered": 3, 00:18:39.019 "num_base_bdevs_operational": 3, 00:18:39.019 "process": { 00:18:39.019 "type": "rebuild", 00:18:39.019 "target": "spare", 00:18:39.019 "progress": { 00:18:39.019 "blocks": 18432, 00:18:39.019 "percent": 14 00:18:39.019 } 00:18:39.019 }, 00:18:39.019 "base_bdevs_list": [ 00:18:39.019 { 00:18:39.019 "name": "spare", 00:18:39.019 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:39.019 "is_configured": true, 00:18:39.019 "data_offset": 0, 00:18:39.019 "data_size": 65536 00:18:39.019 }, 00:18:39.019 { 00:18:39.019 "name": "BaseBdev2", 00:18:39.019 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:39.019 "is_configured": true, 00:18:39.019 "data_offset": 0, 00:18:39.019 "data_size": 65536 00:18:39.019 }, 00:18:39.019 { 00:18:39.019 "name": "BaseBdev3", 00:18:39.019 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:39.019 "is_configured": true, 00:18:39.019 "data_offset": 0, 00:18:39.019 "data_size": 65536 00:18:39.019 } 00:18:39.019 ] 00:18:39.019 }' 00:18:39.019 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.278 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.278 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.278 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.278 22:35:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:39.278 22:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.278 22:35:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.278 [2024-09-27 22:35:34.933237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.278 [2024-09-27 22:35:35.020446] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:39.278 [2024-09-27 22:35:35.020535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.278 [2024-09-27 22:35:35.020578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.278 [2024-09-27 22:35:35.020590] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.278 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.279 "name": "raid_bdev1", 00:18:39.279 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:39.279 "strip_size_kb": 64, 00:18:39.279 "state": "online", 00:18:39.279 "raid_level": "raid5f", 00:18:39.279 "superblock": false, 00:18:39.279 "num_base_bdevs": 3, 00:18:39.279 "num_base_bdevs_discovered": 2, 00:18:39.279 "num_base_bdevs_operational": 2, 00:18:39.279 "base_bdevs_list": [ 00:18:39.279 { 00:18:39.279 "name": null, 00:18:39.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.279 "is_configured": false, 00:18:39.279 "data_offset": 0, 00:18:39.279 "data_size": 65536 00:18:39.279 }, 00:18:39.279 { 00:18:39.279 "name": "BaseBdev2", 00:18:39.279 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:39.279 "is_configured": true, 00:18:39.279 "data_offset": 0, 00:18:39.279 "data_size": 65536 00:18:39.279 }, 00:18:39.279 { 00:18:39.279 "name": "BaseBdev3", 00:18:39.279 "uuid": 
"28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:39.279 "is_configured": true, 00:18:39.279 "data_offset": 0, 00:18:39.279 "data_size": 65536 00:18:39.279 } 00:18:39.279 ] 00:18:39.279 }' 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.279 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.848 "name": "raid_bdev1", 00:18:39.848 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:39.848 "strip_size_kb": 64, 00:18:39.848 "state": "online", 00:18:39.848 "raid_level": "raid5f", 00:18:39.848 "superblock": false, 00:18:39.848 "num_base_bdevs": 3, 00:18:39.848 "num_base_bdevs_discovered": 2, 00:18:39.848 "num_base_bdevs_operational": 2, 00:18:39.848 "base_bdevs_list": [ 00:18:39.848 { 00:18:39.848 
"name": null, 00:18:39.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.848 "is_configured": false, 00:18:39.848 "data_offset": 0, 00:18:39.848 "data_size": 65536 00:18:39.848 }, 00:18:39.848 { 00:18:39.848 "name": "BaseBdev2", 00:18:39.848 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:39.848 "is_configured": true, 00:18:39.848 "data_offset": 0, 00:18:39.848 "data_size": 65536 00:18:39.848 }, 00:18:39.848 { 00:18:39.848 "name": "BaseBdev3", 00:18:39.848 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:39.848 "is_configured": true, 00:18:39.848 "data_offset": 0, 00:18:39.848 "data_size": 65536 00:18:39.848 } 00:18:39.848 ] 00:18:39.848 }' 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.848 [2024-09-27 22:35:35.618247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.848 [2024-09-27 22:35:35.638132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.848 22:35:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:39.848 [2024-09-27 22:35:35.648372] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.784 22:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.044 "name": "raid_bdev1", 00:18:41.044 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:41.044 "strip_size_kb": 64, 00:18:41.044 "state": "online", 00:18:41.044 "raid_level": "raid5f", 00:18:41.044 "superblock": false, 00:18:41.044 "num_base_bdevs": 3, 00:18:41.044 "num_base_bdevs_discovered": 3, 00:18:41.044 "num_base_bdevs_operational": 3, 00:18:41.044 "process": { 00:18:41.044 "type": "rebuild", 00:18:41.044 "target": "spare", 00:18:41.044 "progress": { 00:18:41.044 "blocks": 18432, 00:18:41.044 "percent": 14 00:18:41.044 } 00:18:41.044 }, 00:18:41.044 "base_bdevs_list": [ 00:18:41.044 { 00:18:41.044 "name": "spare", 00:18:41.044 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:41.044 "is_configured": true, 00:18:41.044 "data_offset": 0, 
00:18:41.044 "data_size": 65536 00:18:41.044 }, 00:18:41.044 { 00:18:41.044 "name": "BaseBdev2", 00:18:41.044 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:41.044 "is_configured": true, 00:18:41.044 "data_offset": 0, 00:18:41.044 "data_size": 65536 00:18:41.044 }, 00:18:41.044 { 00:18:41.044 "name": "BaseBdev3", 00:18:41.044 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:41.044 "is_configured": true, 00:18:41.044 "data_offset": 0, 00:18:41.044 "data_size": 65536 00:18:41.044 } 00:18:41.044 ] 00:18:41.044 }' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=640 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.044 22:35:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.044 "name": "raid_bdev1", 00:18:41.044 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:41.044 "strip_size_kb": 64, 00:18:41.044 "state": "online", 00:18:41.044 "raid_level": "raid5f", 00:18:41.044 "superblock": false, 00:18:41.044 "num_base_bdevs": 3, 00:18:41.044 "num_base_bdevs_discovered": 3, 00:18:41.044 "num_base_bdevs_operational": 3, 00:18:41.044 "process": { 00:18:41.044 "type": "rebuild", 00:18:41.044 "target": "spare", 00:18:41.044 "progress": { 00:18:41.044 "blocks": 22528, 00:18:41.044 "percent": 17 00:18:41.044 } 00:18:41.044 }, 00:18:41.044 "base_bdevs_list": [ 00:18:41.044 { 00:18:41.044 "name": "spare", 00:18:41.044 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:41.044 "is_configured": true, 00:18:41.044 "data_offset": 0, 00:18:41.044 "data_size": 65536 00:18:41.044 }, 00:18:41.044 { 00:18:41.044 "name": "BaseBdev2", 00:18:41.044 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:41.044 "is_configured": true, 00:18:41.044 "data_offset": 0, 00:18:41.044 "data_size": 65536 00:18:41.044 }, 00:18:41.044 { 00:18:41.044 "name": "BaseBdev3", 00:18:41.044 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:41.044 "is_configured": true, 00:18:41.044 "data_offset": 0, 00:18:41.044 "data_size": 65536 00:18:41.044 } 
00:18:41.044 ] 00:18:41.044 }' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.044 22:35:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.422 "name": "raid_bdev1", 00:18:42.422 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:42.422 
"strip_size_kb": 64, 00:18:42.422 "state": "online", 00:18:42.422 "raid_level": "raid5f", 00:18:42.422 "superblock": false, 00:18:42.422 "num_base_bdevs": 3, 00:18:42.422 "num_base_bdevs_discovered": 3, 00:18:42.422 "num_base_bdevs_operational": 3, 00:18:42.422 "process": { 00:18:42.422 "type": "rebuild", 00:18:42.422 "target": "spare", 00:18:42.422 "progress": { 00:18:42.422 "blocks": 45056, 00:18:42.422 "percent": 34 00:18:42.422 } 00:18:42.422 }, 00:18:42.422 "base_bdevs_list": [ 00:18:42.422 { 00:18:42.422 "name": "spare", 00:18:42.422 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:42.422 "is_configured": true, 00:18:42.422 "data_offset": 0, 00:18:42.422 "data_size": 65536 00:18:42.422 }, 00:18:42.422 { 00:18:42.422 "name": "BaseBdev2", 00:18:42.422 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:42.422 "is_configured": true, 00:18:42.422 "data_offset": 0, 00:18:42.422 "data_size": 65536 00:18:42.422 }, 00:18:42.422 { 00:18:42.422 "name": "BaseBdev3", 00:18:42.422 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:42.422 "is_configured": true, 00:18:42.422 "data_offset": 0, 00:18:42.422 "data_size": 65536 00:18:42.422 } 00:18:42.422 ] 00:18:42.422 }' 00:18:42.422 22:35:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.422 22:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.422 22:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.422 22:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.422 22:35:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.359 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.359 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.359 22:35:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.359 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.360 "name": "raid_bdev1", 00:18:43.360 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:43.360 "strip_size_kb": 64, 00:18:43.360 "state": "online", 00:18:43.360 "raid_level": "raid5f", 00:18:43.360 "superblock": false, 00:18:43.360 "num_base_bdevs": 3, 00:18:43.360 "num_base_bdevs_discovered": 3, 00:18:43.360 "num_base_bdevs_operational": 3, 00:18:43.360 "process": { 00:18:43.360 "type": "rebuild", 00:18:43.360 "target": "spare", 00:18:43.360 "progress": { 00:18:43.360 "blocks": 67584, 00:18:43.360 "percent": 51 00:18:43.360 } 00:18:43.360 }, 00:18:43.360 "base_bdevs_list": [ 00:18:43.360 { 00:18:43.360 "name": "spare", 00:18:43.360 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:43.360 "is_configured": true, 00:18:43.360 "data_offset": 0, 00:18:43.360 "data_size": 65536 00:18:43.360 }, 00:18:43.360 { 00:18:43.360 "name": "BaseBdev2", 00:18:43.360 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:43.360 
"is_configured": true, 00:18:43.360 "data_offset": 0, 00:18:43.360 "data_size": 65536 00:18:43.360 }, 00:18:43.360 { 00:18:43.360 "name": "BaseBdev3", 00:18:43.360 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:43.360 "is_configured": true, 00:18:43.360 "data_offset": 0, 00:18:43.360 "data_size": 65536 00:18:43.360 } 00:18:43.360 ] 00:18:43.360 }' 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.360 22:35:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.737 "name": "raid_bdev1", 00:18:44.737 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:44.737 "strip_size_kb": 64, 00:18:44.737 "state": "online", 00:18:44.737 "raid_level": "raid5f", 00:18:44.737 "superblock": false, 00:18:44.737 "num_base_bdevs": 3, 00:18:44.737 "num_base_bdevs_discovered": 3, 00:18:44.737 "num_base_bdevs_operational": 3, 00:18:44.737 "process": { 00:18:44.737 "type": "rebuild", 00:18:44.737 "target": "spare", 00:18:44.737 "progress": { 00:18:44.737 "blocks": 92160, 00:18:44.737 "percent": 70 00:18:44.737 } 00:18:44.737 }, 00:18:44.737 "base_bdevs_list": [ 00:18:44.737 { 00:18:44.737 "name": "spare", 00:18:44.737 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:44.737 "is_configured": true, 00:18:44.737 "data_offset": 0, 00:18:44.737 "data_size": 65536 00:18:44.737 }, 00:18:44.737 { 00:18:44.737 "name": "BaseBdev2", 00:18:44.737 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:44.737 "is_configured": true, 00:18:44.737 "data_offset": 0, 00:18:44.737 "data_size": 65536 00:18:44.737 }, 00:18:44.737 { 00:18:44.737 "name": "BaseBdev3", 00:18:44.737 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:44.737 "is_configured": true, 00:18:44.737 "data_offset": 0, 00:18:44.737 "data_size": 65536 00:18:44.737 } 00:18:44.737 ] 00:18:44.737 }' 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.737 22:35:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.737 22:35:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.674 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.674 "name": "raid_bdev1", 00:18:45.674 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:45.674 "strip_size_kb": 64, 00:18:45.674 "state": "online", 00:18:45.674 "raid_level": "raid5f", 00:18:45.674 "superblock": false, 00:18:45.674 "num_base_bdevs": 3, 00:18:45.674 "num_base_bdevs_discovered": 3, 00:18:45.674 "num_base_bdevs_operational": 3, 00:18:45.674 "process": { 00:18:45.674 "type": "rebuild", 00:18:45.674 "target": "spare", 00:18:45.674 "progress": { 00:18:45.674 "blocks": 114688, 00:18:45.674 "percent": 87 00:18:45.675 } 00:18:45.675 }, 00:18:45.675 "base_bdevs_list": [ 00:18:45.675 { 
00:18:45.675 "name": "spare", 00:18:45.675 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:45.675 "is_configured": true, 00:18:45.675 "data_offset": 0, 00:18:45.675 "data_size": 65536 00:18:45.675 }, 00:18:45.675 { 00:18:45.675 "name": "BaseBdev2", 00:18:45.675 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:45.675 "is_configured": true, 00:18:45.675 "data_offset": 0, 00:18:45.675 "data_size": 65536 00:18:45.675 }, 00:18:45.675 { 00:18:45.675 "name": "BaseBdev3", 00:18:45.675 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:45.675 "is_configured": true, 00:18:45.675 "data_offset": 0, 00:18:45.675 "data_size": 65536 00:18:45.675 } 00:18:45.675 ] 00:18:45.675 }' 00:18:45.675 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.675 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.675 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.675 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.675 22:35:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.243 [2024-09-27 22:35:42.104867] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:46.243 [2024-09-27 22:35:42.104977] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:46.243 [2024-09-27 22:35:42.105058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.812 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.812 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.812 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.812 22:35:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.812 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.812 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.813 "name": "raid_bdev1", 00:18:46.813 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:46.813 "strip_size_kb": 64, 00:18:46.813 "state": "online", 00:18:46.813 "raid_level": "raid5f", 00:18:46.813 "superblock": false, 00:18:46.813 "num_base_bdevs": 3, 00:18:46.813 "num_base_bdevs_discovered": 3, 00:18:46.813 "num_base_bdevs_operational": 3, 00:18:46.813 "base_bdevs_list": [ 00:18:46.813 { 00:18:46.813 "name": "spare", 00:18:46.813 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:46.813 "is_configured": true, 00:18:46.813 "data_offset": 0, 00:18:46.813 "data_size": 65536 00:18:46.813 }, 00:18:46.813 { 00:18:46.813 "name": "BaseBdev2", 00:18:46.813 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:46.813 "is_configured": true, 00:18:46.813 "data_offset": 0, 00:18:46.813 "data_size": 65536 00:18:46.813 }, 00:18:46.813 { 00:18:46.813 "name": "BaseBdev3", 00:18:46.813 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:46.813 "is_configured": true, 00:18:46.813 "data_offset": 0, 00:18:46.813 "data_size": 65536 00:18:46.813 } 
00:18:46.813 ] 00:18:46.813 }' 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.813 "name": "raid_bdev1", 00:18:46.813 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:46.813 "strip_size_kb": 64, 00:18:46.813 "state": "online", 00:18:46.813 "raid_level": "raid5f", 00:18:46.813 "superblock": false, 
00:18:46.813 "num_base_bdevs": 3, 00:18:46.813 "num_base_bdevs_discovered": 3, 00:18:46.813 "num_base_bdevs_operational": 3, 00:18:46.813 "base_bdevs_list": [ 00:18:46.813 { 00:18:46.813 "name": "spare", 00:18:46.813 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:46.813 "is_configured": true, 00:18:46.813 "data_offset": 0, 00:18:46.813 "data_size": 65536 00:18:46.813 }, 00:18:46.813 { 00:18:46.813 "name": "BaseBdev2", 00:18:46.813 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:46.813 "is_configured": true, 00:18:46.813 "data_offset": 0, 00:18:46.813 "data_size": 65536 00:18:46.813 }, 00:18:46.813 { 00:18:46.813 "name": "BaseBdev3", 00:18:46.813 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 00:18:46.813 "is_configured": true, 00:18:46.813 "data_offset": 0, 00:18:46.813 "data_size": 65536 00:18:46.813 } 00:18:46.813 ] 00:18:46.813 }' 00:18:46.813 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.072 
22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.072 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.073 "name": "raid_bdev1", 00:18:47.073 "uuid": "a3fd7837-e4ac-40a1-b5f1-70738f1cc695", 00:18:47.073 "strip_size_kb": 64, 00:18:47.073 "state": "online", 00:18:47.073 "raid_level": "raid5f", 00:18:47.073 "superblock": false, 00:18:47.073 "num_base_bdevs": 3, 00:18:47.073 "num_base_bdevs_discovered": 3, 00:18:47.073 "num_base_bdevs_operational": 3, 00:18:47.073 "base_bdevs_list": [ 00:18:47.073 { 00:18:47.073 "name": "spare", 00:18:47.073 "uuid": "cdb1396f-a0ea-5299-b635-ca97f98ea24b", 00:18:47.073 "is_configured": true, 00:18:47.073 "data_offset": 0, 00:18:47.073 "data_size": 65536 00:18:47.073 }, 00:18:47.073 { 00:18:47.073 "name": "BaseBdev2", 00:18:47.073 "uuid": "67e75997-4fbf-534e-9a1b-ed5e16b1beaa", 00:18:47.073 "is_configured": true, 00:18:47.073 "data_offset": 0, 00:18:47.073 "data_size": 65536 00:18:47.073 }, 00:18:47.073 { 00:18:47.073 "name": "BaseBdev3", 00:18:47.073 "uuid": "28bf52ae-52fd-5137-953c-3e70e0d9ead5", 
00:18:47.073 "is_configured": true, 00:18:47.073 "data_offset": 0, 00:18:47.073 "data_size": 65536 00:18:47.073 } 00:18:47.073 ] 00:18:47.073 }' 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.073 22:35:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.332 [2024-09-27 22:35:43.177164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.332 [2024-09-27 22:35:43.177198] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.332 [2024-09-27 22:35:43.177284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.332 [2024-09-27 22:35:43.177369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.332 [2024-09-27 22:35:43.177389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.332 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:47.591 /dev/nbd0 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:47.591 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:47.850 1+0 records in 00:18:47.850 1+0 records out 00:18:47.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685068 s, 6.0 MB/s 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.850 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:47.850 /dev/nbd1 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:48.136 22:35:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.136 1+0 records in 00:18:48.136 1+0 records out 00:18:48.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452278 s, 9.1 MB/s 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.136 22:35:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.397 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82666 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 82666 ']' 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 82666 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.656 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82666 00:18:48.914 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.914 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.914 killing process with pid 82666 00:18:48.914 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82666' 00:18:48.914 Received shutdown signal, test time was about 60.000000 seconds 00:18:48.914 00:18:48.914 Latency(us) 
00:18:48.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.914 =================================================================================================================== 00:18:48.914 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:48.914 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 82666 00:18:48.914 [2024-09-27 22:35:44.558529] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.914 22:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 82666 00:18:49.172 [2024-09-27 22:35:44.966143] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.067 22:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:51.067 00:18:51.067 real 0m16.607s 00:18:51.067 user 0m19.801s 00:18:51.067 sys 0m2.541s 00:18:51.067 22:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.067 22:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 ************************************ 00:18:51.067 END TEST raid5f_rebuild_test 00:18:51.067 ************************************ 00:18:51.325 22:35:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:51.325 22:35:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:51.325 22:35:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.325 22:35:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.325 ************************************ 00:18:51.325 START TEST raid5f_rebuild_test_sb 00:18:51.326 ************************************ 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid5f 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=83122 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 83122 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83122 ']' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.326 22:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.326 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:51.326 Zero copy mechanism will not be used. 00:18:51.326 [2024-09-27 22:35:47.137875] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:18:51.326 [2024-09-27 22:35:47.138018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83122 ] 00:18:51.584 [2024-09-27 22:35:47.308553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.843 [2024-09-27 22:35:47.540258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.101 [2024-09-27 22:35:47.786044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.101 [2024-09-27 22:35:47.786090] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.667 22:35:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.667 BaseBdev1_malloc 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.667 [2024-09-27 22:35:48.336060] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:52.667 [2024-09-27 22:35:48.336145] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.667 [2024-09-27 22:35:48.336173] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:52.667 [2024-09-27 22:35:48.336193] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.667 [2024-09-27 22:35:48.338763] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.667 [2024-09-27 22:35:48.338943] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:52.667 BaseBdev1 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.667 BaseBdev2_malloc 00:18:52.667 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 [2024-09-27 22:35:48.404270] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:52.668 [2024-09-27 22:35:48.404522] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.668 [2024-09-27 22:35:48.404674] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:52.668 [2024-09-27 22:35:48.404754] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.668 [2024-09-27 22:35:48.407452] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.668 [2024-09-27 22:35:48.407624] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:52.668 BaseBdev2 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 BaseBdev3_malloc 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 [2024-09-27 22:35:48.469104] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:52.668 [2024-09-27 22:35:48.469364] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.668 [2024-09-27 22:35:48.469407] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:52.668 [2024-09-27 22:35:48.469428] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.668 [2024-09-27 22:35:48.472347] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.668 BaseBdev3 00:18:52.668 [2024-09-27 22:35:48.472534] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 spare_malloc 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.668 spare_delay 00:18:52.668 
22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.668 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.927 [2024-09-27 22:35:48.546748] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.927 [2024-09-27 22:35:48.546969] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.927 [2024-09-27 22:35:48.547043] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:52.927 [2024-09-27 22:35:48.547063] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.927 [2024-09-27 22:35:48.549871] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.927 [2024-09-27 22:35:48.549942] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.927 spare 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.927 [2024-09-27 22:35:48.558925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.927 [2024-09-27 22:35:48.561408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.927 [2024-09-27 22:35:48.561667] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.927 [2024-09-27 22:35:48.561947] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:52.927 [2024-09-27 22:35:48.561995] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:52.927 [2024-09-27 22:35:48.562435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:52.927 [2024-09-27 22:35:48.569656] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:52.927 [2024-09-27 22:35:48.569805] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:52.927 [2024-09-27 22:35:48.570108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.927 "name": "raid_bdev1", 00:18:52.927 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:52.927 "strip_size_kb": 64, 00:18:52.927 "state": "online", 00:18:52.927 "raid_level": "raid5f", 00:18:52.927 "superblock": true, 00:18:52.927 "num_base_bdevs": 3, 00:18:52.927 "num_base_bdevs_discovered": 3, 00:18:52.927 "num_base_bdevs_operational": 3, 00:18:52.927 "base_bdevs_list": [ 00:18:52.927 { 00:18:52.927 "name": "BaseBdev1", 00:18:52.927 "uuid": "374c8f17-97d4-5665-8b52-60fbaf2a8a7c", 00:18:52.927 "is_configured": true, 00:18:52.927 "data_offset": 2048, 00:18:52.927 "data_size": 63488 00:18:52.927 }, 00:18:52.927 { 00:18:52.927 "name": "BaseBdev2", 00:18:52.927 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:52.927 "is_configured": true, 00:18:52.927 "data_offset": 2048, 00:18:52.927 "data_size": 63488 00:18:52.927 }, 00:18:52.927 { 00:18:52.927 "name": "BaseBdev3", 00:18:52.927 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:52.927 "is_configured": true, 00:18:52.927 "data_offset": 2048, 00:18:52.927 "data_size": 63488 00:18:52.927 } 00:18:52.927 ] 00:18:52.927 }' 00:18:52.927 22:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.927 22:35:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.186 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.186 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:53.186 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.186 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.186 [2024-09-27 22:35:49.036347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:53.445 22:35:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.445 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:53.703 [2024-09-27 22:35:49.360254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:53.703 /dev/nbd0 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:53.703 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.704 1+0 records in 00:18:53.704 1+0 records out 00:18:53.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470652 s, 8.7 MB/s 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:53.704 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:54.272 496+0 records in 00:18:54.272 496+0 records out 00:18:54.272 65011712 bytes (65 MB, 62 MiB) copied, 0.457754 s, 142 MB/s 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.272 22:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:54.272 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:54.272 [2024-09-27 22:35:50.146864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.531 [2024-09-27 22:35:50.165813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.531 "name": "raid_bdev1", 00:18:54.531 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:54.531 "strip_size_kb": 64, 00:18:54.531 "state": "online", 00:18:54.531 "raid_level": "raid5f", 00:18:54.531 "superblock": true, 00:18:54.531 "num_base_bdevs": 3, 00:18:54.531 "num_base_bdevs_discovered": 2, 00:18:54.531 "num_base_bdevs_operational": 2, 00:18:54.531 "base_bdevs_list": [ 00:18:54.531 { 00:18:54.531 "name": null, 00:18:54.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.531 "is_configured": false, 00:18:54.531 "data_offset": 0, 00:18:54.531 "data_size": 63488 00:18:54.531 }, 00:18:54.531 { 00:18:54.531 "name": "BaseBdev2", 00:18:54.531 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:54.531 "is_configured": true, 00:18:54.531 "data_offset": 2048, 00:18:54.531 "data_size": 63488 00:18:54.531 }, 00:18:54.531 { 00:18:54.531 "name": "BaseBdev3", 00:18:54.531 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:54.531 "is_configured": true, 00:18:54.531 "data_offset": 2048, 00:18:54.531 "data_size": 63488 00:18:54.531 } 00:18:54.531 ] 00:18:54.531 }' 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.531 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.788 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:54.788 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.788 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.789 [2024-09-27 22:35:50.605196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.789 [2024-09-27 22:35:50.624145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:54.789 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.789 22:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:54.789 [2024-09-27 22:35:50.632749] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.190 "name": "raid_bdev1", 00:18:56.190 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:56.190 "strip_size_kb": 64, 00:18:56.190 "state": "online", 00:18:56.190 "raid_level": "raid5f", 00:18:56.190 "superblock": true, 00:18:56.190 "num_base_bdevs": 3, 00:18:56.190 "num_base_bdevs_discovered": 3, 00:18:56.190 "num_base_bdevs_operational": 3, 00:18:56.190 "process": { 00:18:56.190 "type": "rebuild", 00:18:56.190 "target": "spare", 00:18:56.190 "progress": { 
00:18:56.190 "blocks": 20480, 00:18:56.190 "percent": 16 00:18:56.190 } 00:18:56.190 }, 00:18:56.190 "base_bdevs_list": [ 00:18:56.190 { 00:18:56.190 "name": "spare", 00:18:56.190 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:18:56.190 "is_configured": true, 00:18:56.190 "data_offset": 2048, 00:18:56.190 "data_size": 63488 00:18:56.190 }, 00:18:56.190 { 00:18:56.190 "name": "BaseBdev2", 00:18:56.190 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:56.190 "is_configured": true, 00:18:56.190 "data_offset": 2048, 00:18:56.190 "data_size": 63488 00:18:56.190 }, 00:18:56.190 { 00:18:56.190 "name": "BaseBdev3", 00:18:56.190 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:56.190 "is_configured": true, 00:18:56.190 "data_offset": 2048, 00:18:56.190 "data_size": 63488 00:18:56.190 } 00:18:56.190 ] 00:18:56.190 }' 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.190 [2024-09-27 22:35:51.776627] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.190 [2024-09-27 22:35:51.842648] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.190 [2024-09-27 22:35:51.842876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:56.190 [2024-09-27 22:35:51.843053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.190 [2024-09-27 22:35:51.843074] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.190 22:35:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.190 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.190 "name": "raid_bdev1", 00:18:56.190 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:56.190 "strip_size_kb": 64, 00:18:56.191 "state": "online", 00:18:56.191 "raid_level": "raid5f", 00:18:56.191 "superblock": true, 00:18:56.191 "num_base_bdevs": 3, 00:18:56.191 "num_base_bdevs_discovered": 2, 00:18:56.191 "num_base_bdevs_operational": 2, 00:18:56.191 "base_bdevs_list": [ 00:18:56.191 { 00:18:56.191 "name": null, 00:18:56.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.191 "is_configured": false, 00:18:56.191 "data_offset": 0, 00:18:56.191 "data_size": 63488 00:18:56.191 }, 00:18:56.191 { 00:18:56.191 "name": "BaseBdev2", 00:18:56.191 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:56.191 "is_configured": true, 00:18:56.191 "data_offset": 2048, 00:18:56.191 "data_size": 63488 00:18:56.191 }, 00:18:56.191 { 00:18:56.191 "name": "BaseBdev3", 00:18:56.191 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:56.191 "is_configured": true, 00:18:56.191 "data_offset": 2048, 00:18:56.191 "data_size": 63488 00:18:56.191 } 00:18:56.191 ] 00:18:56.191 }' 00:18:56.191 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.191 22:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.450 "name": "raid_bdev1", 00:18:56.450 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:56.450 "strip_size_kb": 64, 00:18:56.450 "state": "online", 00:18:56.450 "raid_level": "raid5f", 00:18:56.450 "superblock": true, 00:18:56.450 "num_base_bdevs": 3, 00:18:56.450 "num_base_bdevs_discovered": 2, 00:18:56.450 "num_base_bdevs_operational": 2, 00:18:56.450 "base_bdevs_list": [ 00:18:56.450 { 00:18:56.450 "name": null, 00:18:56.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.450 "is_configured": false, 00:18:56.450 "data_offset": 0, 00:18:56.450 "data_size": 63488 00:18:56.450 }, 00:18:56.450 { 00:18:56.450 "name": "BaseBdev2", 00:18:56.450 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:56.450 "is_configured": true, 00:18:56.450 "data_offset": 2048, 00:18:56.450 "data_size": 63488 00:18:56.450 }, 00:18:56.450 { 00:18:56.450 "name": "BaseBdev3", 00:18:56.450 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:56.450 "is_configured": true, 00:18:56.450 "data_offset": 2048, 00:18:56.450 "data_size": 63488 00:18:56.450 } 00:18:56.450 ] 00:18:56.450 }' 00:18:56.450 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 [2024-09-27 22:35:52.404895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.708 [2024-09-27 22:35:52.423000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.708 22:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:56.708 [2024-09-27 22:35:52.432174] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.640 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.640 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.640 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.640 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.640 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.641 "name": "raid_bdev1", 00:18:57.641 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:57.641 "strip_size_kb": 64, 00:18:57.641 "state": "online", 00:18:57.641 "raid_level": "raid5f", 00:18:57.641 "superblock": true, 00:18:57.641 "num_base_bdevs": 3, 00:18:57.641 "num_base_bdevs_discovered": 3, 00:18:57.641 "num_base_bdevs_operational": 3, 00:18:57.641 "process": { 00:18:57.641 "type": "rebuild", 00:18:57.641 "target": "spare", 00:18:57.641 "progress": { 00:18:57.641 "blocks": 20480, 00:18:57.641 "percent": 16 00:18:57.641 } 00:18:57.641 }, 00:18:57.641 "base_bdevs_list": [ 00:18:57.641 { 00:18:57.641 "name": "spare", 00:18:57.641 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:18:57.641 "is_configured": true, 00:18:57.641 "data_offset": 2048, 00:18:57.641 "data_size": 63488 00:18:57.641 }, 00:18:57.641 { 00:18:57.641 "name": "BaseBdev2", 00:18:57.641 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:57.641 "is_configured": true, 00:18:57.641 "data_offset": 2048, 00:18:57.641 "data_size": 63488 00:18:57.641 }, 00:18:57.641 { 00:18:57.641 "name": "BaseBdev3", 00:18:57.641 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:57.641 "is_configured": true, 00:18:57.641 "data_offset": 2048, 00:18:57.641 "data_size": 63488 00:18:57.641 } 00:18:57.641 ] 00:18:57.641 }' 00:18:57.641 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.901 
22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:57.901 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=657 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.901 "name": "raid_bdev1", 00:18:57.901 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:57.901 "strip_size_kb": 64, 00:18:57.901 "state": "online", 00:18:57.901 "raid_level": "raid5f", 00:18:57.901 "superblock": true, 00:18:57.901 "num_base_bdevs": 3, 00:18:57.901 "num_base_bdevs_discovered": 3, 00:18:57.901 "num_base_bdevs_operational": 3, 00:18:57.901 "process": { 00:18:57.901 "type": "rebuild", 00:18:57.901 "target": "spare", 00:18:57.901 "progress": { 00:18:57.901 "blocks": 22528, 00:18:57.901 "percent": 17 00:18:57.901 } 00:18:57.901 }, 00:18:57.901 "base_bdevs_list": [ 00:18:57.901 { 00:18:57.901 "name": "spare", 00:18:57.901 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:18:57.901 "is_configured": true, 00:18:57.901 "data_offset": 2048, 00:18:57.901 "data_size": 63488 00:18:57.901 }, 00:18:57.901 { 00:18:57.901 "name": "BaseBdev2", 00:18:57.901 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:57.901 "is_configured": true, 00:18:57.901 "data_offset": 2048, 00:18:57.901 "data_size": 63488 00:18:57.901 }, 00:18:57.901 { 00:18:57.901 "name": "BaseBdev3", 00:18:57.901 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:57.901 "is_configured": true, 00:18:57.901 "data_offset": 2048, 00:18:57.901 "data_size": 63488 00:18:57.901 } 00:18:57.901 ] 00:18:57.901 }' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.901 22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.901 
22:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.279 "name": "raid_bdev1", 00:18:59.279 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:18:59.279 "strip_size_kb": 64, 00:18:59.279 "state": "online", 00:18:59.279 "raid_level": "raid5f", 00:18:59.279 "superblock": true, 00:18:59.279 "num_base_bdevs": 3, 00:18:59.279 "num_base_bdevs_discovered": 3, 00:18:59.279 "num_base_bdevs_operational": 3, 00:18:59.279 "process": { 00:18:59.279 "type": "rebuild", 00:18:59.279 "target": "spare", 00:18:59.279 "progress": { 00:18:59.279 "blocks": 45056, 00:18:59.279 "percent": 35 00:18:59.279 } 00:18:59.279 }, 00:18:59.279 
"base_bdevs_list": [ 00:18:59.279 { 00:18:59.279 "name": "spare", 00:18:59.279 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:18:59.279 "is_configured": true, 00:18:59.279 "data_offset": 2048, 00:18:59.279 "data_size": 63488 00:18:59.279 }, 00:18:59.279 { 00:18:59.279 "name": "BaseBdev2", 00:18:59.279 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:18:59.279 "is_configured": true, 00:18:59.279 "data_offset": 2048, 00:18:59.279 "data_size": 63488 00:18:59.279 }, 00:18:59.279 { 00:18:59.279 "name": "BaseBdev3", 00:18:59.279 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:18:59.279 "is_configured": true, 00:18:59.279 "data_offset": 2048, 00:18:59.279 "data_size": 63488 00:18:59.279 } 00:18:59.279 ] 00:18:59.279 }' 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.279 22:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.215 22:35:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.215 "name": "raid_bdev1", 00:19:00.215 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:00.215 "strip_size_kb": 64, 00:19:00.215 "state": "online", 00:19:00.215 "raid_level": "raid5f", 00:19:00.215 "superblock": true, 00:19:00.215 "num_base_bdevs": 3, 00:19:00.215 "num_base_bdevs_discovered": 3, 00:19:00.215 "num_base_bdevs_operational": 3, 00:19:00.215 "process": { 00:19:00.215 "type": "rebuild", 00:19:00.215 "target": "spare", 00:19:00.215 "progress": { 00:19:00.215 "blocks": 67584, 00:19:00.215 "percent": 53 00:19:00.215 } 00:19:00.215 }, 00:19:00.215 "base_bdevs_list": [ 00:19:00.215 { 00:19:00.215 "name": "spare", 00:19:00.215 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:00.215 "is_configured": true, 00:19:00.215 "data_offset": 2048, 00:19:00.215 "data_size": 63488 00:19:00.215 }, 00:19:00.215 { 00:19:00.215 "name": "BaseBdev2", 00:19:00.215 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:00.215 "is_configured": true, 00:19:00.215 "data_offset": 2048, 00:19:00.215 "data_size": 63488 00:19:00.215 }, 00:19:00.215 { 00:19:00.215 "name": "BaseBdev3", 00:19:00.215 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:00.215 "is_configured": true, 00:19:00.215 "data_offset": 2048, 00:19:00.215 "data_size": 63488 00:19:00.215 } 00:19:00.215 ] 00:19:00.215 }' 00:19:00.215 22:35:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.215 22:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.151 22:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.151 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.410 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.410 "name": "raid_bdev1", 00:19:01.410 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:01.410 
"strip_size_kb": 64, 00:19:01.410 "state": "online", 00:19:01.410 "raid_level": "raid5f", 00:19:01.410 "superblock": true, 00:19:01.410 "num_base_bdevs": 3, 00:19:01.410 "num_base_bdevs_discovered": 3, 00:19:01.410 "num_base_bdevs_operational": 3, 00:19:01.410 "process": { 00:19:01.410 "type": "rebuild", 00:19:01.410 "target": "spare", 00:19:01.410 "progress": { 00:19:01.410 "blocks": 92160, 00:19:01.410 "percent": 72 00:19:01.410 } 00:19:01.410 }, 00:19:01.410 "base_bdevs_list": [ 00:19:01.410 { 00:19:01.410 "name": "spare", 00:19:01.410 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:01.410 "is_configured": true, 00:19:01.410 "data_offset": 2048, 00:19:01.410 "data_size": 63488 00:19:01.410 }, 00:19:01.410 { 00:19:01.410 "name": "BaseBdev2", 00:19:01.410 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:01.410 "is_configured": true, 00:19:01.410 "data_offset": 2048, 00:19:01.410 "data_size": 63488 00:19:01.410 }, 00:19:01.410 { 00:19:01.410 "name": "BaseBdev3", 00:19:01.410 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:01.410 "is_configured": true, 00:19:01.410 "data_offset": 2048, 00:19:01.410 "data_size": 63488 00:19:01.410 } 00:19:01.410 ] 00:19:01.410 }' 00:19:01.410 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.410 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.410 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.410 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.410 22:35:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.384 "name": "raid_bdev1", 00:19:02.384 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:02.384 "strip_size_kb": 64, 00:19:02.384 "state": "online", 00:19:02.384 "raid_level": "raid5f", 00:19:02.384 "superblock": true, 00:19:02.384 "num_base_bdevs": 3, 00:19:02.384 "num_base_bdevs_discovered": 3, 00:19:02.384 "num_base_bdevs_operational": 3, 00:19:02.384 "process": { 00:19:02.384 "type": "rebuild", 00:19:02.384 "target": "spare", 00:19:02.384 "progress": { 00:19:02.384 "blocks": 114688, 00:19:02.384 "percent": 90 00:19:02.384 } 00:19:02.384 }, 00:19:02.384 "base_bdevs_list": [ 00:19:02.384 { 00:19:02.384 "name": "spare", 00:19:02.384 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:02.384 "is_configured": true, 00:19:02.384 "data_offset": 2048, 00:19:02.384 "data_size": 63488 00:19:02.384 }, 00:19:02.384 { 00:19:02.384 "name": "BaseBdev2", 00:19:02.384 "uuid": 
"ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:02.384 "is_configured": true, 00:19:02.384 "data_offset": 2048, 00:19:02.384 "data_size": 63488 00:19:02.384 }, 00:19:02.384 { 00:19:02.384 "name": "BaseBdev3", 00:19:02.384 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:02.384 "is_configured": true, 00:19:02.384 "data_offset": 2048, 00:19:02.384 "data_size": 63488 00:19:02.384 } 00:19:02.384 ] 00:19:02.384 }' 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.384 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.645 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.645 22:35:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:02.906 [2024-09-27 22:35:58.684561] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:02.906 [2024-09-27 22:35:58.684659] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:02.906 [2024-09-27 22:35:58.684796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.473 "name": "raid_bdev1", 00:19:03.473 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:03.473 "strip_size_kb": 64, 00:19:03.473 "state": "online", 00:19:03.473 "raid_level": "raid5f", 00:19:03.473 "superblock": true, 00:19:03.473 "num_base_bdevs": 3, 00:19:03.473 "num_base_bdevs_discovered": 3, 00:19:03.473 "num_base_bdevs_operational": 3, 00:19:03.473 "base_bdevs_list": [ 00:19:03.473 { 00:19:03.473 "name": "spare", 00:19:03.473 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:03.473 "is_configured": true, 00:19:03.473 "data_offset": 2048, 00:19:03.473 "data_size": 63488 00:19:03.473 }, 00:19:03.473 { 00:19:03.473 "name": "BaseBdev2", 00:19:03.473 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:03.473 "is_configured": true, 00:19:03.473 "data_offset": 2048, 00:19:03.473 "data_size": 63488 00:19:03.473 }, 00:19:03.473 { 00:19:03.473 "name": "BaseBdev3", 00:19:03.473 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:03.473 "is_configured": true, 00:19:03.473 "data_offset": 2048, 00:19:03.473 "data_size": 63488 00:19:03.473 } 00:19:03.473 ] 00:19:03.473 }' 00:19:03.473 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.733 "name": "raid_bdev1", 00:19:03.733 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:03.733 "strip_size_kb": 64, 00:19:03.733 "state": "online", 00:19:03.733 "raid_level": "raid5f", 00:19:03.733 "superblock": true, 00:19:03.733 "num_base_bdevs": 3, 00:19:03.733 "num_base_bdevs_discovered": 3, 00:19:03.733 "num_base_bdevs_operational": 3, 00:19:03.733 "base_bdevs_list": [ 
00:19:03.733 { 00:19:03.733 "name": "spare", 00:19:03.733 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:03.733 "is_configured": true, 00:19:03.733 "data_offset": 2048, 00:19:03.733 "data_size": 63488 00:19:03.733 }, 00:19:03.733 { 00:19:03.733 "name": "BaseBdev2", 00:19:03.733 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:03.733 "is_configured": true, 00:19:03.733 "data_offset": 2048, 00:19:03.733 "data_size": 63488 00:19:03.733 }, 00:19:03.733 { 00:19:03.733 "name": "BaseBdev3", 00:19:03.733 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:03.733 "is_configured": true, 00:19:03.733 "data_offset": 2048, 00:19:03.733 "data_size": 63488 00:19:03.733 } 00:19:03.733 ] 00:19:03.733 }' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.733 22:35:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.733 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.993 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.993 "name": "raid_bdev1", 00:19:03.993 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:03.993 "strip_size_kb": 64, 00:19:03.993 "state": "online", 00:19:03.993 "raid_level": "raid5f", 00:19:03.993 "superblock": true, 00:19:03.993 "num_base_bdevs": 3, 00:19:03.993 "num_base_bdevs_discovered": 3, 00:19:03.993 "num_base_bdevs_operational": 3, 00:19:03.993 "base_bdevs_list": [ 00:19:03.993 { 00:19:03.993 "name": "spare", 00:19:03.993 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:03.993 "is_configured": true, 00:19:03.993 "data_offset": 2048, 00:19:03.993 "data_size": 63488 00:19:03.993 }, 00:19:03.993 { 00:19:03.993 "name": "BaseBdev2", 00:19:03.993 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:03.993 "is_configured": true, 00:19:03.993 "data_offset": 2048, 00:19:03.993 "data_size": 63488 00:19:03.993 }, 00:19:03.993 { 00:19:03.993 "name": "BaseBdev3", 00:19:03.993 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:03.993 "is_configured": true, 00:19:03.993 "data_offset": 2048, 00:19:03.993 
"data_size": 63488 00:19:03.993 } 00:19:03.993 ] 00:19:03.993 }' 00:19:03.993 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.993 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.253 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.253 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.253 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.253 [2024-09-27 22:35:59.996225] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.253 [2024-09-27 22:35:59.996273] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.253 [2024-09-27 22:35:59.996384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.253 [2024-09-27 22:35:59.996472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.253 [2024-09-27 22:35:59.996494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:04.253 22:35:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:04.253 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:04.512 /dev/nbd0 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:04.512 22:36:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.512 1+0 records in 00:19:04.512 1+0 records out 00:19:04.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070802 s, 5.8 MB/s 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:04.512 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:04.839 /dev/nbd1 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:04.839 22:36:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.839 1+0 records in 00:19:04.839 1+0 records out 00:19:04.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860346 s, 4.8 MB/s 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.839 22:36:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:04.839 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.097 22:36:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.357 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.357 
22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.616 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.616 [2024-09-27 22:36:01.316635] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.616 
[2024-09-27 22:36:01.316733] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.616 [2024-09-27 22:36:01.316761] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:05.616 [2024-09-27 22:36:01.316780] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.617 [2024-09-27 22:36:01.319537] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.617 [2024-09-27 22:36:01.319587] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.617 [2024-09-27 22:36:01.319688] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:05.617 [2024-09-27 22:36:01.319740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.617 [2024-09-27 22:36:01.319884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.617 [2024-09-27 22:36:01.320015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:05.617 spare 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.617 [2024-09-27 22:36:01.419965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:05.617 [2024-09-27 22:36:01.420248] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:05.617 [2024-09-27 22:36:01.420698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:05.617 [2024-09-27 22:36:01.427761] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:05.617 [2024-09-27 22:36:01.427934] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:05.617 [2024-09-27 22:36:01.428374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.617 "name": "raid_bdev1", 00:19:05.617 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:05.617 "strip_size_kb": 64, 00:19:05.617 "state": "online", 00:19:05.617 "raid_level": "raid5f", 00:19:05.617 "superblock": true, 00:19:05.617 "num_base_bdevs": 3, 00:19:05.617 "num_base_bdevs_discovered": 3, 00:19:05.617 "num_base_bdevs_operational": 3, 00:19:05.617 "base_bdevs_list": [ 00:19:05.617 { 00:19:05.617 "name": "spare", 00:19:05.617 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:05.617 "is_configured": true, 00:19:05.617 "data_offset": 2048, 00:19:05.617 "data_size": 63488 00:19:05.617 }, 00:19:05.617 { 00:19:05.617 "name": "BaseBdev2", 00:19:05.617 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:05.617 "is_configured": true, 00:19:05.617 "data_offset": 2048, 00:19:05.617 "data_size": 63488 00:19:05.617 }, 00:19:05.617 { 00:19:05.617 "name": "BaseBdev3", 00:19:05.617 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:05.617 "is_configured": true, 00:19:05.617 "data_offset": 2048, 00:19:05.617 "data_size": 63488 00:19:05.617 } 00:19:05.617 ] 00:19:05.617 }' 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.617 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.184 "name": "raid_bdev1", 00:19:06.184 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:06.184 "strip_size_kb": 64, 00:19:06.184 "state": "online", 00:19:06.184 "raid_level": "raid5f", 00:19:06.184 "superblock": true, 00:19:06.184 "num_base_bdevs": 3, 00:19:06.184 "num_base_bdevs_discovered": 3, 00:19:06.184 "num_base_bdevs_operational": 3, 00:19:06.184 "base_bdevs_list": [ 00:19:06.184 { 00:19:06.184 "name": "spare", 00:19:06.184 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:06.184 "is_configured": true, 00:19:06.184 "data_offset": 2048, 00:19:06.184 "data_size": 63488 00:19:06.184 }, 00:19:06.184 { 00:19:06.184 "name": "BaseBdev2", 00:19:06.184 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:06.184 "is_configured": true, 00:19:06.184 "data_offset": 2048, 00:19:06.184 "data_size": 63488 00:19:06.184 }, 00:19:06.184 { 00:19:06.184 "name": "BaseBdev3", 00:19:06.184 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:06.184 "is_configured": true, 00:19:06.184 "data_offset": 2048, 00:19:06.184 "data_size": 63488 00:19:06.184 } 00:19:06.184 ] 00:19:06.184 }' 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.184 22:36:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.184 [2024-09-27 22:36:02.046231] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.184 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.443 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.443 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.443 "name": "raid_bdev1", 00:19:06.443 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:06.443 "strip_size_kb": 64, 00:19:06.443 "state": "online", 00:19:06.443 "raid_level": "raid5f", 00:19:06.443 "superblock": true, 00:19:06.443 "num_base_bdevs": 3, 00:19:06.443 "num_base_bdevs_discovered": 2, 00:19:06.443 "num_base_bdevs_operational": 2, 00:19:06.443 "base_bdevs_list": [ 00:19:06.443 { 00:19:06.443 "name": null, 00:19:06.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.443 "is_configured": false, 00:19:06.443 "data_offset": 0, 00:19:06.443 "data_size": 63488 00:19:06.443 }, 00:19:06.443 { 00:19:06.443 "name": "BaseBdev2", 
00:19:06.443 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:06.443 "is_configured": true, 00:19:06.443 "data_offset": 2048, 00:19:06.443 "data_size": 63488 00:19:06.443 }, 00:19:06.443 { 00:19:06.443 "name": "BaseBdev3", 00:19:06.443 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:06.443 "is_configured": true, 00:19:06.443 "data_offset": 2048, 00:19:06.443 "data_size": 63488 00:19:06.443 } 00:19:06.443 ] 00:19:06.443 }' 00:19:06.443 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.443 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.702 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.702 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.702 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.702 [2024-09-27 22:36:02.453801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.702 [2024-09-27 22:36:02.454019] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:06.703 [2024-09-27 22:36:02.454041] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:06.703 [2024-09-27 22:36:02.454084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.703 [2024-09-27 22:36:02.472003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:06.703 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.703 22:36:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:06.703 [2024-09-27 22:36:02.480892] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.640 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.900 "name": "raid_bdev1", 00:19:07.900 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:07.900 "strip_size_kb": 64, 00:19:07.900 "state": "online", 00:19:07.900 
"raid_level": "raid5f", 00:19:07.900 "superblock": true, 00:19:07.900 "num_base_bdevs": 3, 00:19:07.900 "num_base_bdevs_discovered": 3, 00:19:07.900 "num_base_bdevs_operational": 3, 00:19:07.900 "process": { 00:19:07.900 "type": "rebuild", 00:19:07.900 "target": "spare", 00:19:07.900 "progress": { 00:19:07.900 "blocks": 20480, 00:19:07.900 "percent": 16 00:19:07.900 } 00:19:07.900 }, 00:19:07.900 "base_bdevs_list": [ 00:19:07.900 { 00:19:07.900 "name": "spare", 00:19:07.900 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:07.900 "is_configured": true, 00:19:07.900 "data_offset": 2048, 00:19:07.900 "data_size": 63488 00:19:07.900 }, 00:19:07.900 { 00:19:07.900 "name": "BaseBdev2", 00:19:07.900 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:07.900 "is_configured": true, 00:19:07.900 "data_offset": 2048, 00:19:07.900 "data_size": 63488 00:19:07.900 }, 00:19:07.900 { 00:19:07.900 "name": "BaseBdev3", 00:19:07.900 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:07.900 "is_configured": true, 00:19:07.900 "data_offset": 2048, 00:19:07.900 "data_size": 63488 00:19:07.900 } 00:19:07.900 ] 00:19:07.900 }' 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.900 [2024-09-27 22:36:03.624458] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.900 [2024-09-27 22:36:03.691184] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:07.900 [2024-09-27 22:36:03.691274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.900 [2024-09-27 22:36:03.691293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.900 [2024-09-27 22:36:03.691305] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.900 "name": "raid_bdev1", 00:19:07.900 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:07.900 "strip_size_kb": 64, 00:19:07.900 "state": "online", 00:19:07.900 "raid_level": "raid5f", 00:19:07.900 "superblock": true, 00:19:07.900 "num_base_bdevs": 3, 00:19:07.900 "num_base_bdevs_discovered": 2, 00:19:07.900 "num_base_bdevs_operational": 2, 00:19:07.900 "base_bdevs_list": [ 00:19:07.900 { 00:19:07.900 "name": null, 00:19:07.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.900 "is_configured": false, 00:19:07.900 "data_offset": 0, 00:19:07.900 "data_size": 63488 00:19:07.900 }, 00:19:07.900 { 00:19:07.900 "name": "BaseBdev2", 00:19:07.900 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:07.900 "is_configured": true, 00:19:07.900 "data_offset": 2048, 00:19:07.900 "data_size": 63488 00:19:07.900 }, 00:19:07.900 { 00:19:07.900 "name": "BaseBdev3", 00:19:07.900 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:07.900 "is_configured": true, 00:19:07.900 "data_offset": 2048, 00:19:07.900 "data_size": 63488 00:19:07.900 } 00:19:07.900 ] 00:19:07.900 }' 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.900 22:36:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.468 22:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:08.468 22:36:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.468 22:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.468 [2024-09-27 22:36:04.185287] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.468 [2024-09-27 22:36:04.185376] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.468 [2024-09-27 22:36:04.185402] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:08.468 [2024-09-27 22:36:04.185422] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.468 [2024-09-27 22:36:04.185944] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.468 [2024-09-27 22:36:04.185971] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.468 [2024-09-27 22:36:04.186097] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:08.468 [2024-09-27 22:36:04.186118] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.468 [2024-09-27 22:36:04.186130] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:08.468 [2024-09-27 22:36:04.186163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.468 [2024-09-27 22:36:04.205160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:19:08.468 spare 00:19:08.468 22:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.468 22:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:08.468 [2024-09-27 22:36:04.214514] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.406 "name": "raid_bdev1", 00:19:09.406 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:09.406 "strip_size_kb": 64, 00:19:09.406 "state": 
"online", 00:19:09.406 "raid_level": "raid5f", 00:19:09.406 "superblock": true, 00:19:09.406 "num_base_bdevs": 3, 00:19:09.406 "num_base_bdevs_discovered": 3, 00:19:09.406 "num_base_bdevs_operational": 3, 00:19:09.406 "process": { 00:19:09.406 "type": "rebuild", 00:19:09.406 "target": "spare", 00:19:09.406 "progress": { 00:19:09.406 "blocks": 20480, 00:19:09.406 "percent": 16 00:19:09.406 } 00:19:09.406 }, 00:19:09.406 "base_bdevs_list": [ 00:19:09.406 { 00:19:09.406 "name": "spare", 00:19:09.406 "uuid": "787ae30a-77ff-5be2-9e21-60dd6dd1cf8b", 00:19:09.406 "is_configured": true, 00:19:09.406 "data_offset": 2048, 00:19:09.406 "data_size": 63488 00:19:09.406 }, 00:19:09.406 { 00:19:09.406 "name": "BaseBdev2", 00:19:09.406 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:09.406 "is_configured": true, 00:19:09.406 "data_offset": 2048, 00:19:09.406 "data_size": 63488 00:19:09.406 }, 00:19:09.406 { 00:19:09.406 "name": "BaseBdev3", 00:19:09.406 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:09.406 "is_configured": true, 00:19:09.406 "data_offset": 2048, 00:19:09.406 "data_size": 63488 00:19:09.406 } 00:19:09.406 ] 00:19:09.406 }' 00:19:09.406 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.665 [2024-09-27 22:36:05.333861] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.665 [2024-09-27 22:36:05.425766] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:09.665 [2024-09-27 22:36:05.425854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.665 [2024-09-27 22:36:05.425879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.665 [2024-09-27 22:36:05.425891] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.665 "name": "raid_bdev1", 00:19:09.665 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:09.665 "strip_size_kb": 64, 00:19:09.665 "state": "online", 00:19:09.665 "raid_level": "raid5f", 00:19:09.665 "superblock": true, 00:19:09.665 "num_base_bdevs": 3, 00:19:09.665 "num_base_bdevs_discovered": 2, 00:19:09.665 "num_base_bdevs_operational": 2, 00:19:09.665 "base_bdevs_list": [ 00:19:09.665 { 00:19:09.665 "name": null, 00:19:09.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.665 "is_configured": false, 00:19:09.665 "data_offset": 0, 00:19:09.665 "data_size": 63488 00:19:09.665 }, 00:19:09.665 { 00:19:09.665 "name": "BaseBdev2", 00:19:09.665 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:09.665 "is_configured": true, 00:19:09.665 "data_offset": 2048, 00:19:09.665 "data_size": 63488 00:19:09.665 }, 00:19:09.665 { 00:19:09.665 "name": "BaseBdev3", 00:19:09.665 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:09.665 "is_configured": true, 00:19:09.665 "data_offset": 2048, 00:19:09.665 "data_size": 63488 00:19:09.665 } 00:19:09.665 ] 00:19:09.665 }' 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.665 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.232 "name": "raid_bdev1", 00:19:10.232 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:10.232 "strip_size_kb": 64, 00:19:10.232 "state": "online", 00:19:10.232 "raid_level": "raid5f", 00:19:10.232 "superblock": true, 00:19:10.232 "num_base_bdevs": 3, 00:19:10.232 "num_base_bdevs_discovered": 2, 00:19:10.232 "num_base_bdevs_operational": 2, 00:19:10.232 "base_bdevs_list": [ 00:19:10.232 { 00:19:10.232 "name": null, 00:19:10.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.232 "is_configured": false, 00:19:10.232 "data_offset": 0, 00:19:10.232 "data_size": 63488 00:19:10.232 }, 00:19:10.232 { 00:19:10.232 "name": "BaseBdev2", 00:19:10.232 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:10.232 "is_configured": true, 00:19:10.232 "data_offset": 2048, 00:19:10.232 "data_size": 63488 00:19:10.232 }, 00:19:10.232 { 00:19:10.232 "name": "BaseBdev3", 00:19:10.232 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:10.232 "is_configured": true, 
00:19:10.232 "data_offset": 2048, 00:19:10.232 "data_size": 63488 00:19:10.232 } 00:19:10.232 ] 00:19:10.232 }' 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.232 22:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.232 [2024-09-27 22:36:06.020457] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:10.232 [2024-09-27 22:36:06.020542] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.232 [2024-09-27 22:36:06.020575] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:10.232 [2024-09-27 22:36:06.020589] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.232 [2024-09-27 22:36:06.021130] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.232 [2024-09-27 
22:36:06.021154] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:10.232 [2024-09-27 22:36:06.021251] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:10.232 [2024-09-27 22:36:06.021267] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.232 [2024-09-27 22:36:06.021284] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:10.232 [2024-09-27 22:36:06.021298] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:10.232 BaseBdev1 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.232 22:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.171 22:36:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.171 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.431 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.431 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.431 "name": "raid_bdev1", 00:19:11.431 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:11.431 "strip_size_kb": 64, 00:19:11.431 "state": "online", 00:19:11.431 "raid_level": "raid5f", 00:19:11.431 "superblock": true, 00:19:11.431 "num_base_bdevs": 3, 00:19:11.431 "num_base_bdevs_discovered": 2, 00:19:11.431 "num_base_bdevs_operational": 2, 00:19:11.431 "base_bdevs_list": [ 00:19:11.431 { 00:19:11.431 "name": null, 00:19:11.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.431 "is_configured": false, 00:19:11.431 "data_offset": 0, 00:19:11.431 "data_size": 63488 00:19:11.431 }, 00:19:11.431 { 00:19:11.431 "name": "BaseBdev2", 00:19:11.431 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:11.431 "is_configured": true, 00:19:11.431 "data_offset": 2048, 00:19:11.431 "data_size": 63488 00:19:11.431 }, 00:19:11.431 { 00:19:11.431 "name": "BaseBdev3", 00:19:11.431 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:11.431 "is_configured": true, 00:19:11.431 "data_offset": 2048, 00:19:11.431 "data_size": 63488 00:19:11.431 } 00:19:11.431 ] 00:19:11.431 }' 00:19:11.431 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.431 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.690 "name": "raid_bdev1", 00:19:11.690 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:11.690 "strip_size_kb": 64, 00:19:11.690 "state": "online", 00:19:11.690 "raid_level": "raid5f", 00:19:11.690 "superblock": true, 00:19:11.690 "num_base_bdevs": 3, 00:19:11.690 "num_base_bdevs_discovered": 2, 00:19:11.690 "num_base_bdevs_operational": 2, 00:19:11.690 "base_bdevs_list": [ 00:19:11.690 { 00:19:11.690 "name": null, 00:19:11.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.690 "is_configured": false, 00:19:11.690 "data_offset": 0, 00:19:11.690 "data_size": 63488 00:19:11.690 }, 00:19:11.690 { 00:19:11.690 "name": "BaseBdev2", 00:19:11.690 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 
00:19:11.690 "is_configured": true, 00:19:11.690 "data_offset": 2048, 00:19:11.690 "data_size": 63488 00:19:11.690 }, 00:19:11.690 { 00:19:11.690 "name": "BaseBdev3", 00:19:11.690 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:11.690 "is_configured": true, 00:19:11.690 "data_offset": 2048, 00:19:11.690 "data_size": 63488 00:19:11.690 } 00:19:11.690 ] 00:19:11.690 }' 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.690 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.949 22:36:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.949 [2024-09-27 22:36:07.584184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.949 [2024-09-27 22:36:07.584367] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:11.949 [2024-09-27 22:36:07.584385] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:11.949 request: 00:19:11.949 { 00:19:11.949 "base_bdev": "BaseBdev1", 00:19:11.949 "raid_bdev": "raid_bdev1", 00:19:11.949 "method": "bdev_raid_add_base_bdev", 00:19:11.949 "req_id": 1 00:19:11.949 } 00:19:11.949 Got JSON-RPC error response 00:19:11.949 response: 00:19:11.949 { 00:19:11.949 "code": -22, 00:19:11.949 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:11.949 } 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.949 22:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.925 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.925 "name": "raid_bdev1", 00:19:12.925 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:12.925 "strip_size_kb": 64, 00:19:12.925 "state": "online", 00:19:12.925 "raid_level": "raid5f", 00:19:12.925 "superblock": true, 00:19:12.926 "num_base_bdevs": 3, 00:19:12.926 "num_base_bdevs_discovered": 2, 00:19:12.926 "num_base_bdevs_operational": 2, 00:19:12.926 "base_bdevs_list": [ 00:19:12.926 { 00:19:12.926 "name": null, 00:19:12.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.926 "is_configured": false, 00:19:12.926 "data_offset": 0, 00:19:12.926 "data_size": 63488 00:19:12.926 }, 00:19:12.926 { 00:19:12.926 
"name": "BaseBdev2", 00:19:12.926 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:12.926 "is_configured": true, 00:19:12.926 "data_offset": 2048, 00:19:12.926 "data_size": 63488 00:19:12.926 }, 00:19:12.926 { 00:19:12.926 "name": "BaseBdev3", 00:19:12.926 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:12.926 "is_configured": true, 00:19:12.926 "data_offset": 2048, 00:19:12.926 "data_size": 63488 00:19:12.926 } 00:19:12.926 ] 00:19:12.926 }' 00:19:12.926 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.926 22:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.495 "name": "raid_bdev1", 00:19:13.495 "uuid": "8de2fb94-2e80-4447-bbd2-55ad318a9b7d", 00:19:13.495 
"strip_size_kb": 64, 00:19:13.495 "state": "online", 00:19:13.495 "raid_level": "raid5f", 00:19:13.495 "superblock": true, 00:19:13.495 "num_base_bdevs": 3, 00:19:13.495 "num_base_bdevs_discovered": 2, 00:19:13.495 "num_base_bdevs_operational": 2, 00:19:13.495 "base_bdevs_list": [ 00:19:13.495 { 00:19:13.495 "name": null, 00:19:13.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.495 "is_configured": false, 00:19:13.495 "data_offset": 0, 00:19:13.495 "data_size": 63488 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "name": "BaseBdev2", 00:19:13.495 "uuid": "ed9c2ab5-6305-5d87-bed9-547822d421b8", 00:19:13.495 "is_configured": true, 00:19:13.495 "data_offset": 2048, 00:19:13.495 "data_size": 63488 00:19:13.495 }, 00:19:13.495 { 00:19:13.495 "name": "BaseBdev3", 00:19:13.495 "uuid": "e1afd2d0-8024-50ec-bbfe-69f6cc13bbbf", 00:19:13.495 "is_configured": true, 00:19:13.495 "data_offset": 2048, 00:19:13.495 "data_size": 63488 00:19:13.495 } 00:19:13.495 ] 00:19:13.495 }' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 83122 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83122 ']' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 83122 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.495 22:36:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83122 00:19:13.495 killing process with pid 83122 00:19:13.495 Received shutdown signal, test time was about 60.000000 seconds 00:19:13.495 00:19:13.495 Latency(us) 00:19:13.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.495 =================================================================================================================== 00:19:13.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83122' 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 83122 00:19:13.495 [2024-09-27 22:36:09.265037] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.495 [2024-09-27 22:36:09.265202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.495 22:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 83122 00:19:13.495 [2024-09-27 22:36:09.265282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.495 [2024-09-27 22:36:09.265300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:14.064 [2024-09-27 22:36:09.686165] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:15.967 22:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:15.967 00:19:15.967 real 0m24.702s 00:19:15.967 user 0m30.823s 00:19:15.967 sys 0m3.496s 00:19:15.967 22:36:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:15.967 22:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.967 ************************************ 00:19:15.967 END TEST raid5f_rebuild_test_sb 00:19:15.967 ************************************ 00:19:15.967 22:36:11 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:15.967 22:36:11 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:19:15.967 22:36:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:15.967 22:36:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:15.967 22:36:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.967 ************************************ 00:19:15.967 START TEST raid5f_state_function_test 00:19:15.967 ************************************ 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83881 00:19:15.967 Process raid pid: 83881 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83881' 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83881 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83881 ']' 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.967 22:36:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.226 [2024-09-27 22:36:11.933163] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:19:16.226 [2024-09-27 22:36:11.933331] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.485 [2024-09-27 22:36:12.111293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.485 [2024-09-27 22:36:12.347235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.743 [2024-09-27 22:36:12.589920] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.743 [2024-09-27 22:36:12.589968] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.312 [2024-09-27 22:36:13.079280] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:17.312 [2024-09-27 22:36:13.079353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:17.312 [2024-09-27 22:36:13.079364] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:17.312 [2024-09-27 22:36:13.079377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:17.312 [2024-09-27 22:36:13.079384] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:17.312 [2024-09-27 22:36:13.079399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:17.312 [2024-09-27 22:36:13.079407] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:17.312 [2024-09-27 22:36:13.079419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.312 22:36:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.312 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.312 "name": "Existed_Raid", 00:19:17.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.312 "strip_size_kb": 64, 00:19:17.312 "state": "configuring", 00:19:17.312 "raid_level": "raid5f", 00:19:17.312 "superblock": false, 00:19:17.312 "num_base_bdevs": 4, 00:19:17.312 "num_base_bdevs_discovered": 0, 00:19:17.312 "num_base_bdevs_operational": 4, 00:19:17.312 "base_bdevs_list": [ 00:19:17.312 { 00:19:17.312 "name": "BaseBdev1", 00:19:17.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.312 "is_configured": false, 00:19:17.312 "data_offset": 0, 00:19:17.312 "data_size": 0 00:19:17.312 }, 00:19:17.312 { 00:19:17.312 "name": "BaseBdev2", 00:19:17.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.312 "is_configured": false, 00:19:17.312 "data_offset": 0, 00:19:17.312 "data_size": 0 00:19:17.312 }, 00:19:17.312 { 00:19:17.312 "name": "BaseBdev3", 00:19:17.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.312 "is_configured": false, 00:19:17.312 "data_offset": 0, 00:19:17.312 "data_size": 0 00:19:17.313 }, 00:19:17.313 { 00:19:17.313 "name": "BaseBdev4", 00:19:17.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.313 "is_configured": false, 00:19:17.313 "data_offset": 0, 00:19:17.313 "data_size": 0 00:19:17.313 } 00:19:17.313 ] 00:19:17.313 }' 00:19:17.313 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.313 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.881 [2024-09-27 22:36:13.534608] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:17.881 [2024-09-27 22:36:13.534670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.881 [2024-09-27 22:36:13.542595] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:17.881 [2024-09-27 22:36:13.542650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:17.881 [2024-09-27 22:36:13.542660] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:17.881 [2024-09-27 22:36:13.542676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:17.881 [2024-09-27 22:36:13.542683] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:17.881 [2024-09-27 22:36:13.542696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:17.881 [2024-09-27 22:36:13.542703] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:17.881 [2024-09-27 22:36:13.542716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.881 [2024-09-27 22:36:13.591536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.881 BaseBdev1 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:17.881 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.882 
22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 [ 00:19:17.882 { 00:19:17.882 "name": "BaseBdev1", 00:19:17.882 "aliases": [ 00:19:17.882 "ec98a51d-ecae-41a3-ac78-581f6e9d3610" 00:19:17.882 ], 00:19:17.882 "product_name": "Malloc disk", 00:19:17.882 "block_size": 512, 00:19:17.882 "num_blocks": 65536, 00:19:17.882 "uuid": "ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:17.882 "assigned_rate_limits": { 00:19:17.882 "rw_ios_per_sec": 0, 00:19:17.882 "rw_mbytes_per_sec": 0, 00:19:17.882 "r_mbytes_per_sec": 0, 00:19:17.882 "w_mbytes_per_sec": 0 00:19:17.882 }, 00:19:17.882 "claimed": true, 00:19:17.882 "claim_type": "exclusive_write", 00:19:17.882 "zoned": false, 00:19:17.882 "supported_io_types": { 00:19:17.882 "read": true, 00:19:17.882 "write": true, 00:19:17.882 "unmap": true, 00:19:17.882 "flush": true, 00:19:17.882 "reset": true, 00:19:17.882 "nvme_admin": false, 00:19:17.882 "nvme_io": false, 00:19:17.882 "nvme_io_md": false, 00:19:17.882 "write_zeroes": true, 00:19:17.882 "zcopy": true, 00:19:17.882 "get_zone_info": false, 00:19:17.882 "zone_management": false, 00:19:17.882 "zone_append": false, 00:19:17.882 "compare": false, 00:19:17.882 "compare_and_write": false, 00:19:17.882 "abort": true, 00:19:17.882 "seek_hole": false, 00:19:17.882 "seek_data": false, 00:19:17.882 "copy": true, 00:19:17.882 "nvme_iov_md": false 00:19:17.882 }, 00:19:17.882 "memory_domains": [ 00:19:17.882 { 00:19:17.882 "dma_device_id": "system", 00:19:17.882 "dma_device_type": 1 00:19:17.882 }, 00:19:17.882 { 00:19:17.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.882 "dma_device_type": 2 00:19:17.882 } 00:19:17.882 ], 00:19:17.882 "driver_specific": {} 00:19:17.882 } 
00:19:17.882 ] 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.882 "name": "Existed_Raid", 00:19:17.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.882 "strip_size_kb": 64, 00:19:17.882 "state": "configuring", 00:19:17.882 "raid_level": "raid5f", 00:19:17.882 "superblock": false, 00:19:17.882 "num_base_bdevs": 4, 00:19:17.882 "num_base_bdevs_discovered": 1, 00:19:17.882 "num_base_bdevs_operational": 4, 00:19:17.882 "base_bdevs_list": [ 00:19:17.882 { 00:19:17.882 "name": "BaseBdev1", 00:19:17.882 "uuid": "ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:17.882 "is_configured": true, 00:19:17.882 "data_offset": 0, 00:19:17.882 "data_size": 65536 00:19:17.882 }, 00:19:17.882 { 00:19:17.882 "name": "BaseBdev2", 00:19:17.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.882 "is_configured": false, 00:19:17.882 "data_offset": 0, 00:19:17.882 "data_size": 0 00:19:17.882 }, 00:19:17.882 { 00:19:17.882 "name": "BaseBdev3", 00:19:17.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.882 "is_configured": false, 00:19:17.882 "data_offset": 0, 00:19:17.882 "data_size": 0 00:19:17.882 }, 00:19:17.882 { 00:19:17.882 "name": "BaseBdev4", 00:19:17.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.882 "is_configured": false, 00:19:17.882 "data_offset": 0, 00:19:17.882 "data_size": 0 00:19:17.882 } 00:19:17.882 ] 00:19:17.882 }' 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.882 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 22:36:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:18.141 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 22:36:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 
[2024-09-27 22:36:14.003280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:18.141 [2024-09-27 22:36:14.003425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 [2024-09-27 22:36:14.011202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.141 [2024-09-27 22:36:14.013469] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:18.141 [2024-09-27 22:36:14.013523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:18.141 [2024-09-27 22:36:14.013535] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:18.141 [2024-09-27 22:36:14.013551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:18.141 [2024-09-27 22:36:14.013559] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:18.141 [2024-09-27 22:36:14.013572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.141 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.401 "name": "Existed_Raid", 00:19:18.401 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:18.401 "strip_size_kb": 64, 00:19:18.401 "state": "configuring", 00:19:18.401 "raid_level": "raid5f", 00:19:18.401 "superblock": false, 00:19:18.401 "num_base_bdevs": 4, 00:19:18.401 "num_base_bdevs_discovered": 1, 00:19:18.401 "num_base_bdevs_operational": 4, 00:19:18.401 "base_bdevs_list": [ 00:19:18.401 { 00:19:18.401 "name": "BaseBdev1", 00:19:18.401 "uuid": "ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:18.401 "is_configured": true, 00:19:18.401 "data_offset": 0, 00:19:18.401 "data_size": 65536 00:19:18.401 }, 00:19:18.401 { 00:19:18.401 "name": "BaseBdev2", 00:19:18.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.401 "is_configured": false, 00:19:18.401 "data_offset": 0, 00:19:18.401 "data_size": 0 00:19:18.401 }, 00:19:18.401 { 00:19:18.401 "name": "BaseBdev3", 00:19:18.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.401 "is_configured": false, 00:19:18.401 "data_offset": 0, 00:19:18.401 "data_size": 0 00:19:18.401 }, 00:19:18.401 { 00:19:18.401 "name": "BaseBdev4", 00:19:18.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.401 "is_configured": false, 00:19:18.401 "data_offset": 0, 00:19:18.401 "data_size": 0 00:19:18.401 } 00:19:18.401 ] 00:19:18.401 }' 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.401 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 [2024-09-27 22:36:14.401854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.661 BaseBdev2 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 [ 00:19:18.661 { 00:19:18.661 "name": "BaseBdev2", 00:19:18.661 "aliases": [ 00:19:18.661 "771654db-179a-41a6-9967-869ca6a0887f" 00:19:18.661 ], 00:19:18.661 "product_name": "Malloc disk", 00:19:18.661 "block_size": 512, 00:19:18.661 "num_blocks": 65536, 00:19:18.661 "uuid": "771654db-179a-41a6-9967-869ca6a0887f", 00:19:18.661 "assigned_rate_limits": { 00:19:18.661 "rw_ios_per_sec": 0, 00:19:18.661 "rw_mbytes_per_sec": 0, 00:19:18.661 
"r_mbytes_per_sec": 0, 00:19:18.661 "w_mbytes_per_sec": 0 00:19:18.661 }, 00:19:18.661 "claimed": true, 00:19:18.661 "claim_type": "exclusive_write", 00:19:18.661 "zoned": false, 00:19:18.661 "supported_io_types": { 00:19:18.661 "read": true, 00:19:18.661 "write": true, 00:19:18.661 "unmap": true, 00:19:18.661 "flush": true, 00:19:18.661 "reset": true, 00:19:18.661 "nvme_admin": false, 00:19:18.661 "nvme_io": false, 00:19:18.661 "nvme_io_md": false, 00:19:18.661 "write_zeroes": true, 00:19:18.661 "zcopy": true, 00:19:18.661 "get_zone_info": false, 00:19:18.661 "zone_management": false, 00:19:18.661 "zone_append": false, 00:19:18.661 "compare": false, 00:19:18.661 "compare_and_write": false, 00:19:18.661 "abort": true, 00:19:18.661 "seek_hole": false, 00:19:18.661 "seek_data": false, 00:19:18.661 "copy": true, 00:19:18.661 "nvme_iov_md": false 00:19:18.661 }, 00:19:18.661 "memory_domains": [ 00:19:18.661 { 00:19:18.661 "dma_device_id": "system", 00:19:18.661 "dma_device_type": 1 00:19:18.661 }, 00:19:18.661 { 00:19:18.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.661 "dma_device_type": 2 00:19:18.661 } 00:19:18.661 ], 00:19:18.661 "driver_specific": {} 00:19:18.661 } 00:19:18.661 ] 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.661 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.662 "name": "Existed_Raid", 00:19:18.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.662 "strip_size_kb": 64, 00:19:18.662 "state": "configuring", 00:19:18.662 "raid_level": "raid5f", 00:19:18.662 "superblock": false, 00:19:18.662 "num_base_bdevs": 4, 00:19:18.662 "num_base_bdevs_discovered": 2, 00:19:18.662 "num_base_bdevs_operational": 4, 00:19:18.662 "base_bdevs_list": [ 00:19:18.662 { 00:19:18.662 "name": "BaseBdev1", 00:19:18.662 "uuid": 
"ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:18.662 "is_configured": true, 00:19:18.662 "data_offset": 0, 00:19:18.662 "data_size": 65536 00:19:18.662 }, 00:19:18.662 { 00:19:18.662 "name": "BaseBdev2", 00:19:18.662 "uuid": "771654db-179a-41a6-9967-869ca6a0887f", 00:19:18.662 "is_configured": true, 00:19:18.662 "data_offset": 0, 00:19:18.662 "data_size": 65536 00:19:18.662 }, 00:19:18.662 { 00:19:18.662 "name": "BaseBdev3", 00:19:18.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.662 "is_configured": false, 00:19:18.662 "data_offset": 0, 00:19:18.662 "data_size": 0 00:19:18.662 }, 00:19:18.662 { 00:19:18.662 "name": "BaseBdev4", 00:19:18.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.662 "is_configured": false, 00:19:18.662 "data_offset": 0, 00:19:18.662 "data_size": 0 00:19:18.662 } 00:19:18.662 ] 00:19:18.662 }' 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.662 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.230 [2024-09-27 22:36:14.923698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.230 BaseBdev3 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.230 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.230 [ 00:19:19.230 { 00:19:19.230 "name": "BaseBdev3", 00:19:19.230 "aliases": [ 00:19:19.230 "40841e92-bd35-4786-b57e-2c7a0800bc8f" 00:19:19.230 ], 00:19:19.230 "product_name": "Malloc disk", 00:19:19.230 "block_size": 512, 00:19:19.230 "num_blocks": 65536, 00:19:19.230 "uuid": "40841e92-bd35-4786-b57e-2c7a0800bc8f", 00:19:19.230 "assigned_rate_limits": { 00:19:19.230 "rw_ios_per_sec": 0, 00:19:19.230 "rw_mbytes_per_sec": 0, 00:19:19.230 "r_mbytes_per_sec": 0, 00:19:19.230 "w_mbytes_per_sec": 0 00:19:19.230 }, 00:19:19.230 "claimed": true, 00:19:19.230 "claim_type": "exclusive_write", 00:19:19.230 "zoned": false, 00:19:19.230 "supported_io_types": { 00:19:19.230 "read": true, 00:19:19.230 "write": true, 00:19:19.230 "unmap": true, 00:19:19.230 "flush": true, 00:19:19.230 "reset": true, 00:19:19.230 "nvme_admin": false, 
00:19:19.230 "nvme_io": false, 00:19:19.230 "nvme_io_md": false, 00:19:19.230 "write_zeroes": true, 00:19:19.230 "zcopy": true, 00:19:19.230 "get_zone_info": false, 00:19:19.230 "zone_management": false, 00:19:19.230 "zone_append": false, 00:19:19.230 "compare": false, 00:19:19.230 "compare_and_write": false, 00:19:19.230 "abort": true, 00:19:19.230 "seek_hole": false, 00:19:19.230 "seek_data": false, 00:19:19.230 "copy": true, 00:19:19.230 "nvme_iov_md": false 00:19:19.230 }, 00:19:19.231 "memory_domains": [ 00:19:19.231 { 00:19:19.231 "dma_device_id": "system", 00:19:19.231 "dma_device_type": 1 00:19:19.231 }, 00:19:19.231 { 00:19:19.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.231 "dma_device_type": 2 00:19:19.231 } 00:19:19.231 ], 00:19:19.231 "driver_specific": {} 00:19:19.231 } 00:19:19.231 ] 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.231 22:36:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.231 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.231 "name": "Existed_Raid", 00:19:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.231 "strip_size_kb": 64, 00:19:19.231 "state": "configuring", 00:19:19.231 "raid_level": "raid5f", 00:19:19.231 "superblock": false, 00:19:19.231 "num_base_bdevs": 4, 00:19:19.231 "num_base_bdevs_discovered": 3, 00:19:19.231 "num_base_bdevs_operational": 4, 00:19:19.231 "base_bdevs_list": [ 00:19:19.231 { 00:19:19.231 "name": "BaseBdev1", 00:19:19.231 "uuid": "ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:19.231 "is_configured": true, 00:19:19.231 "data_offset": 0, 00:19:19.231 "data_size": 65536 00:19:19.231 }, 00:19:19.231 { 00:19:19.231 "name": "BaseBdev2", 00:19:19.231 "uuid": "771654db-179a-41a6-9967-869ca6a0887f", 00:19:19.231 "is_configured": true, 00:19:19.231 "data_offset": 0, 00:19:19.231 "data_size": 65536 00:19:19.231 }, 00:19:19.231 { 
00:19:19.231 "name": "BaseBdev3", 00:19:19.231 "uuid": "40841e92-bd35-4786-b57e-2c7a0800bc8f", 00:19:19.231 "is_configured": true, 00:19:19.231 "data_offset": 0, 00:19:19.231 "data_size": 65536 00:19:19.231 }, 00:19:19.231 { 00:19:19.231 "name": "BaseBdev4", 00:19:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.231 "is_configured": false, 00:19:19.231 "data_offset": 0, 00:19:19.231 "data_size": 0 00:19:19.231 } 00:19:19.231 ] 00:19:19.231 }' 00:19:19.231 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.231 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.799 [2024-09-27 22:36:15.444218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:19.799 [2024-09-27 22:36:15.444319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:19.799 [2024-09-27 22:36:15.444335] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:19.799 [2024-09-27 22:36:15.444634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:19.799 [2024-09-27 22:36:15.453469] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:19.799 [2024-09-27 22:36:15.453506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:19.799 BaseBdev4 00:19:19.799 [2024-09-27 22:36:15.453812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.799 22:36:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.799 [ 00:19:19.799 { 00:19:19.799 "name": "BaseBdev4", 00:19:19.799 "aliases": [ 00:19:19.799 "1e50917d-5215-49a4-869e-6cad6e66c47f" 00:19:19.799 ], 00:19:19.799 "product_name": "Malloc disk", 00:19:19.799 "block_size": 512, 00:19:19.799 "num_blocks": 65536, 00:19:19.799 "uuid": "1e50917d-5215-49a4-869e-6cad6e66c47f", 00:19:19.799 "assigned_rate_limits": { 00:19:19.799 "rw_ios_per_sec": 0, 00:19:19.799 
"rw_mbytes_per_sec": 0, 00:19:19.799 "r_mbytes_per_sec": 0, 00:19:19.799 "w_mbytes_per_sec": 0 00:19:19.799 }, 00:19:19.799 "claimed": true, 00:19:19.799 "claim_type": "exclusive_write", 00:19:19.799 "zoned": false, 00:19:19.799 "supported_io_types": { 00:19:19.799 "read": true, 00:19:19.799 "write": true, 00:19:19.799 "unmap": true, 00:19:19.799 "flush": true, 00:19:19.799 "reset": true, 00:19:19.799 "nvme_admin": false, 00:19:19.799 "nvme_io": false, 00:19:19.799 "nvme_io_md": false, 00:19:19.799 "write_zeroes": true, 00:19:19.799 "zcopy": true, 00:19:19.799 "get_zone_info": false, 00:19:19.799 "zone_management": false, 00:19:19.799 "zone_append": false, 00:19:19.799 "compare": false, 00:19:19.799 "compare_and_write": false, 00:19:19.799 "abort": true, 00:19:19.799 "seek_hole": false, 00:19:19.799 "seek_data": false, 00:19:19.799 "copy": true, 00:19:19.799 "nvme_iov_md": false 00:19:19.799 }, 00:19:19.799 "memory_domains": [ 00:19:19.799 { 00:19:19.799 "dma_device_id": "system", 00:19:19.799 "dma_device_type": 1 00:19:19.799 }, 00:19:19.799 { 00:19:19.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.799 "dma_device_type": 2 00:19:19.799 } 00:19:19.799 ], 00:19:19.799 "driver_specific": {} 00:19:19.799 } 00:19:19.799 ] 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.799 22:36:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.799 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.799 "name": "Existed_Raid", 00:19:19.799 "uuid": "075204c7-ac1a-4fd7-91cb-1ca0eeca112a", 00:19:19.799 "strip_size_kb": 64, 00:19:19.799 "state": "online", 00:19:19.799 "raid_level": "raid5f", 00:19:19.799 "superblock": false, 00:19:19.799 "num_base_bdevs": 4, 00:19:19.799 "num_base_bdevs_discovered": 4, 00:19:19.799 "num_base_bdevs_operational": 4, 00:19:19.799 "base_bdevs_list": [ 00:19:19.800 { 00:19:19.800 "name": 
"BaseBdev1", 00:19:19.800 "uuid": "ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:19.800 "is_configured": true, 00:19:19.800 "data_offset": 0, 00:19:19.800 "data_size": 65536 00:19:19.800 }, 00:19:19.800 { 00:19:19.800 "name": "BaseBdev2", 00:19:19.800 "uuid": "771654db-179a-41a6-9967-869ca6a0887f", 00:19:19.800 "is_configured": true, 00:19:19.800 "data_offset": 0, 00:19:19.800 "data_size": 65536 00:19:19.800 }, 00:19:19.800 { 00:19:19.800 "name": "BaseBdev3", 00:19:19.800 "uuid": "40841e92-bd35-4786-b57e-2c7a0800bc8f", 00:19:19.800 "is_configured": true, 00:19:19.800 "data_offset": 0, 00:19:19.800 "data_size": 65536 00:19:19.800 }, 00:19:19.800 { 00:19:19.800 "name": "BaseBdev4", 00:19:19.800 "uuid": "1e50917d-5215-49a4-869e-6cad6e66c47f", 00:19:19.800 "is_configured": true, 00:19:19.800 "data_offset": 0, 00:19:19.800 "data_size": 65536 00:19:19.800 } 00:19:19.800 ] 00:19:19.800 }' 00:19:19.800 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.800 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:20.108 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.108 [2024-09-27 22:36:15.953110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.372 22:36:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.372 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.372 "name": "Existed_Raid", 00:19:20.372 "aliases": [ 00:19:20.372 "075204c7-ac1a-4fd7-91cb-1ca0eeca112a" 00:19:20.372 ], 00:19:20.372 "product_name": "Raid Volume", 00:19:20.372 "block_size": 512, 00:19:20.372 "num_blocks": 196608, 00:19:20.372 "uuid": "075204c7-ac1a-4fd7-91cb-1ca0eeca112a", 00:19:20.372 "assigned_rate_limits": { 00:19:20.372 "rw_ios_per_sec": 0, 00:19:20.372 "rw_mbytes_per_sec": 0, 00:19:20.372 "r_mbytes_per_sec": 0, 00:19:20.372 "w_mbytes_per_sec": 0 00:19:20.372 }, 00:19:20.372 "claimed": false, 00:19:20.372 "zoned": false, 00:19:20.372 "supported_io_types": { 00:19:20.372 "read": true, 00:19:20.372 "write": true, 00:19:20.372 "unmap": false, 00:19:20.372 "flush": false, 00:19:20.372 "reset": true, 00:19:20.372 "nvme_admin": false, 00:19:20.372 "nvme_io": false, 00:19:20.372 "nvme_io_md": false, 00:19:20.372 "write_zeroes": true, 00:19:20.372 "zcopy": false, 00:19:20.372 "get_zone_info": false, 00:19:20.372 "zone_management": false, 00:19:20.372 "zone_append": false, 00:19:20.372 "compare": false, 00:19:20.372 "compare_and_write": false, 00:19:20.372 "abort": false, 00:19:20.372 "seek_hole": false, 00:19:20.372 "seek_data": false, 00:19:20.372 "copy": false, 00:19:20.372 "nvme_iov_md": false 00:19:20.372 }, 00:19:20.373 "driver_specific": { 00:19:20.373 "raid": { 00:19:20.373 "uuid": "075204c7-ac1a-4fd7-91cb-1ca0eeca112a", 00:19:20.373 "strip_size_kb": 64, 
00:19:20.373 "state": "online", 00:19:20.373 "raid_level": "raid5f", 00:19:20.373 "superblock": false, 00:19:20.373 "num_base_bdevs": 4, 00:19:20.373 "num_base_bdevs_discovered": 4, 00:19:20.373 "num_base_bdevs_operational": 4, 00:19:20.373 "base_bdevs_list": [ 00:19:20.373 { 00:19:20.373 "name": "BaseBdev1", 00:19:20.373 "uuid": "ec98a51d-ecae-41a3-ac78-581f6e9d3610", 00:19:20.373 "is_configured": true, 00:19:20.373 "data_offset": 0, 00:19:20.373 "data_size": 65536 00:19:20.373 }, 00:19:20.373 { 00:19:20.373 "name": "BaseBdev2", 00:19:20.373 "uuid": "771654db-179a-41a6-9967-869ca6a0887f", 00:19:20.373 "is_configured": true, 00:19:20.373 "data_offset": 0, 00:19:20.373 "data_size": 65536 00:19:20.373 }, 00:19:20.373 { 00:19:20.373 "name": "BaseBdev3", 00:19:20.373 "uuid": "40841e92-bd35-4786-b57e-2c7a0800bc8f", 00:19:20.373 "is_configured": true, 00:19:20.373 "data_offset": 0, 00:19:20.373 "data_size": 65536 00:19:20.373 }, 00:19:20.373 { 00:19:20.373 "name": "BaseBdev4", 00:19:20.373 "uuid": "1e50917d-5215-49a4-869e-6cad6e66c47f", 00:19:20.373 "is_configured": true, 00:19:20.373 "data_offset": 0, 00:19:20.373 "data_size": 65536 00:19:20.373 } 00:19:20.373 ] 00:19:20.373 } 00:19:20.373 } 00:19:20.373 }' 00:19:20.373 22:36:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:20.373 BaseBdev2 00:19:20.373 BaseBdev3 00:19:20.373 BaseBdev4' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.373 22:36:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.373 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:19:20.632 [2024-09-27 22:36:16.264569] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.632 22:36:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.632 "name": "Existed_Raid", 00:19:20.632 "uuid": "075204c7-ac1a-4fd7-91cb-1ca0eeca112a", 00:19:20.632 "strip_size_kb": 64, 00:19:20.632 "state": "online", 00:19:20.632 "raid_level": "raid5f", 00:19:20.632 "superblock": false, 00:19:20.632 "num_base_bdevs": 4, 00:19:20.632 "num_base_bdevs_discovered": 3, 00:19:20.632 "num_base_bdevs_operational": 3, 00:19:20.632 "base_bdevs_list": [ 00:19:20.632 { 00:19:20.632 "name": null, 00:19:20.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.632 "is_configured": false, 00:19:20.632 "data_offset": 0, 00:19:20.632 "data_size": 65536 00:19:20.632 }, 00:19:20.632 { 00:19:20.632 "name": "BaseBdev2", 00:19:20.632 "uuid": "771654db-179a-41a6-9967-869ca6a0887f", 00:19:20.632 "is_configured": true, 00:19:20.632 "data_offset": 0, 00:19:20.632 "data_size": 65536 00:19:20.632 }, 00:19:20.632 { 00:19:20.632 "name": "BaseBdev3", 00:19:20.632 "uuid": "40841e92-bd35-4786-b57e-2c7a0800bc8f", 00:19:20.632 "is_configured": true, 00:19:20.632 "data_offset": 0, 00:19:20.632 "data_size": 65536 00:19:20.632 }, 00:19:20.632 { 00:19:20.632 "name": "BaseBdev4", 00:19:20.632 "uuid": "1e50917d-5215-49a4-869e-6cad6e66c47f", 00:19:20.632 "is_configured": true, 00:19:20.632 "data_offset": 0, 00:19:20.632 "data_size": 65536 00:19:20.632 } 00:19:20.632 ] 00:19:20.632 }' 00:19:20.632 
22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.632 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.201 [2024-09-27 22:36:16.866217] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:21.201 [2024-09-27 22:36:16.866321] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.201 [2024-09-27 22:36:16.964496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.201 22:36:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.201 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:21.201 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.201 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:21.201 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.201 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.201 [2024-09-27 22:36:17.020439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.459 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.459 [2024-09-27 22:36:17.174216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:21.460 [2024-09-27 22:36:17.174280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:21.460 22:36:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.460 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 BaseBdev2 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 [ 00:19:21.720 { 00:19:21.720 "name": "BaseBdev2", 00:19:21.720 "aliases": [ 00:19:21.720 "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d" 00:19:21.720 ], 00:19:21.720 "product_name": "Malloc disk", 00:19:21.720 "block_size": 512, 00:19:21.720 "num_blocks": 65536, 00:19:21.720 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:21.720 "assigned_rate_limits": { 00:19:21.720 "rw_ios_per_sec": 0, 00:19:21.720 "rw_mbytes_per_sec": 0, 00:19:21.720 "r_mbytes_per_sec": 0, 00:19:21.720 "w_mbytes_per_sec": 0 00:19:21.720 }, 00:19:21.720 "claimed": false, 00:19:21.720 "zoned": false, 00:19:21.720 "supported_io_types": { 00:19:21.720 "read": true, 00:19:21.720 "write": true, 00:19:21.720 "unmap": true, 00:19:21.720 "flush": true, 00:19:21.720 "reset": true, 00:19:21.720 "nvme_admin": false, 00:19:21.720 "nvme_io": false, 00:19:21.720 "nvme_io_md": false, 00:19:21.720 "write_zeroes": true, 00:19:21.720 "zcopy": true, 00:19:21.720 "get_zone_info": false, 00:19:21.720 "zone_management": false, 00:19:21.720 "zone_append": false, 00:19:21.720 "compare": false, 00:19:21.720 "compare_and_write": false, 00:19:21.720 "abort": true, 00:19:21.720 "seek_hole": false, 00:19:21.720 "seek_data": false, 00:19:21.720 "copy": true, 00:19:21.720 "nvme_iov_md": false 00:19:21.720 }, 00:19:21.720 "memory_domains": [ 00:19:21.720 { 00:19:21.720 "dma_device_id": "system", 00:19:21.720 "dma_device_type": 1 00:19:21.720 }, 
00:19:21.720 { 00:19:21.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.720 "dma_device_type": 2 00:19:21.720 } 00:19:21.720 ], 00:19:21.720 "driver_specific": {} 00:19:21.720 } 00:19:21.720 ] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 BaseBdev3 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 [ 00:19:21.720 { 00:19:21.720 "name": "BaseBdev3", 00:19:21.720 "aliases": [ 00:19:21.720 "ff5d8392-4002-44a5-a7aa-392eaffa5bff" 00:19:21.720 ], 00:19:21.720 "product_name": "Malloc disk", 00:19:21.720 "block_size": 512, 00:19:21.720 "num_blocks": 65536, 00:19:21.720 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:21.720 "assigned_rate_limits": { 00:19:21.720 "rw_ios_per_sec": 0, 00:19:21.720 "rw_mbytes_per_sec": 0, 00:19:21.720 "r_mbytes_per_sec": 0, 00:19:21.720 "w_mbytes_per_sec": 0 00:19:21.720 }, 00:19:21.720 "claimed": false, 00:19:21.720 "zoned": false, 00:19:21.720 "supported_io_types": { 00:19:21.720 "read": true, 00:19:21.720 "write": true, 00:19:21.720 "unmap": true, 00:19:21.720 "flush": true, 00:19:21.720 "reset": true, 00:19:21.720 "nvme_admin": false, 00:19:21.720 "nvme_io": false, 00:19:21.720 "nvme_io_md": false, 00:19:21.720 "write_zeroes": true, 00:19:21.720 "zcopy": true, 00:19:21.720 "get_zone_info": false, 00:19:21.720 "zone_management": false, 00:19:21.720 "zone_append": false, 00:19:21.720 "compare": false, 00:19:21.720 "compare_and_write": false, 00:19:21.720 "abort": true, 00:19:21.720 "seek_hole": false, 00:19:21.720 "seek_data": false, 00:19:21.720 "copy": true, 00:19:21.720 "nvme_iov_md": false 00:19:21.720 }, 00:19:21.720 "memory_domains": [ 00:19:21.720 { 00:19:21.720 "dma_device_id": "system", 00:19:21.720 
"dma_device_type": 1 00:19:21.720 }, 00:19:21.720 { 00:19:21.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.720 "dma_device_type": 2 00:19:21.720 } 00:19:21.720 ], 00:19:21.720 "driver_specific": {} 00:19:21.720 } 00:19:21.720 ] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 BaseBdev4 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:21.720 22:36:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.720 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.720 [ 00:19:21.720 { 00:19:21.720 "name": "BaseBdev4", 00:19:21.720 "aliases": [ 00:19:21.720 "d81b6c2b-d297-4c82-a006-d783b47f0d58" 00:19:21.720 ], 00:19:21.720 "product_name": "Malloc disk", 00:19:21.720 "block_size": 512, 00:19:21.721 "num_blocks": 65536, 00:19:21.721 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:21.721 "assigned_rate_limits": { 00:19:21.721 "rw_ios_per_sec": 0, 00:19:21.721 "rw_mbytes_per_sec": 0, 00:19:21.721 "r_mbytes_per_sec": 0, 00:19:21.721 "w_mbytes_per_sec": 0 00:19:21.721 }, 00:19:21.721 "claimed": false, 00:19:21.721 "zoned": false, 00:19:21.721 "supported_io_types": { 00:19:21.721 "read": true, 00:19:21.721 "write": true, 00:19:21.721 "unmap": true, 00:19:21.721 "flush": true, 00:19:21.721 "reset": true, 00:19:21.721 "nvme_admin": false, 00:19:21.721 "nvme_io": false, 00:19:21.721 "nvme_io_md": false, 00:19:21.721 "write_zeroes": true, 00:19:21.721 "zcopy": true, 00:19:21.721 "get_zone_info": false, 00:19:21.721 "zone_management": false, 00:19:21.721 "zone_append": false, 00:19:21.721 "compare": false, 00:19:21.721 "compare_and_write": false, 00:19:21.721 "abort": true, 00:19:21.721 "seek_hole": false, 00:19:21.721 "seek_data": false, 00:19:21.721 "copy": true, 00:19:21.721 "nvme_iov_md": false 00:19:21.721 }, 00:19:21.721 "memory_domains": [ 00:19:21.721 { 00:19:21.721 
"dma_device_id": "system", 00:19:21.980 "dma_device_type": 1 00:19:21.980 }, 00:19:21.980 { 00:19:21.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.980 "dma_device_type": 2 00:19:21.980 } 00:19:21.980 ], 00:19:21.980 "driver_specific": {} 00:19:21.980 } 00:19:21.980 ] 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.980 [2024-09-27 22:36:17.605937] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:21.980 [2024-09-27 22:36:17.606009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:21.980 [2024-09-27 22:36:17.606053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.980 [2024-09-27 22:36:17.608322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:21.980 [2024-09-27 22:36:17.608387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.980 "name": "Existed_Raid", 00:19:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.980 "strip_size_kb": 64, 00:19:21.980 "state": "configuring", 00:19:21.980 "raid_level": "raid5f", 00:19:21.980 "superblock": false, 00:19:21.980 
"num_base_bdevs": 4, 00:19:21.980 "num_base_bdevs_discovered": 3, 00:19:21.980 "num_base_bdevs_operational": 4, 00:19:21.980 "base_bdevs_list": [ 00:19:21.980 { 00:19:21.980 "name": "BaseBdev1", 00:19:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.980 "is_configured": false, 00:19:21.980 "data_offset": 0, 00:19:21.980 "data_size": 0 00:19:21.980 }, 00:19:21.980 { 00:19:21.980 "name": "BaseBdev2", 00:19:21.980 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:21.980 "is_configured": true, 00:19:21.980 "data_offset": 0, 00:19:21.980 "data_size": 65536 00:19:21.980 }, 00:19:21.980 { 00:19:21.980 "name": "BaseBdev3", 00:19:21.980 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:21.980 "is_configured": true, 00:19:21.980 "data_offset": 0, 00:19:21.980 "data_size": 65536 00:19:21.980 }, 00:19:21.980 { 00:19:21.980 "name": "BaseBdev4", 00:19:21.980 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:21.980 "is_configured": true, 00:19:21.980 "data_offset": 0, 00:19:21.980 "data_size": 65536 00:19:21.980 } 00:19:21.980 ] 00:19:21.980 }' 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.980 22:36:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.239 [2024-09-27 22:36:18.037323] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.239 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.240 "name": "Existed_Raid", 00:19:22.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.240 "strip_size_kb": 64, 00:19:22.240 "state": "configuring", 00:19:22.240 "raid_level": "raid5f", 00:19:22.240 "superblock": false, 00:19:22.240 "num_base_bdevs": 4, 
00:19:22.240 "num_base_bdevs_discovered": 2, 00:19:22.240 "num_base_bdevs_operational": 4, 00:19:22.240 "base_bdevs_list": [ 00:19:22.240 { 00:19:22.240 "name": "BaseBdev1", 00:19:22.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.240 "is_configured": false, 00:19:22.240 "data_offset": 0, 00:19:22.240 "data_size": 0 00:19:22.240 }, 00:19:22.240 { 00:19:22.240 "name": null, 00:19:22.240 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:22.240 "is_configured": false, 00:19:22.240 "data_offset": 0, 00:19:22.240 "data_size": 65536 00:19:22.240 }, 00:19:22.240 { 00:19:22.240 "name": "BaseBdev3", 00:19:22.240 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:22.240 "is_configured": true, 00:19:22.240 "data_offset": 0, 00:19:22.240 "data_size": 65536 00:19:22.240 }, 00:19:22.240 { 00:19:22.240 "name": "BaseBdev4", 00:19:22.240 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:22.240 "is_configured": true, 00:19:22.240 "data_offset": 0, 00:19:22.240 "data_size": 65536 00:19:22.240 } 00:19:22.240 ] 00:19:22.240 }' 00:19:22.240 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.240 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:22.807 22:36:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 [2024-09-27 22:36:18.533444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.807 BaseBdev1 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.807 22:36:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.807 [ 00:19:22.807 { 00:19:22.807 "name": "BaseBdev1", 00:19:22.807 "aliases": [ 00:19:22.807 "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa" 00:19:22.807 ], 00:19:22.807 "product_name": "Malloc disk", 00:19:22.807 "block_size": 512, 00:19:22.807 "num_blocks": 65536, 00:19:22.807 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:22.807 "assigned_rate_limits": { 00:19:22.807 "rw_ios_per_sec": 0, 00:19:22.807 "rw_mbytes_per_sec": 0, 00:19:22.807 "r_mbytes_per_sec": 0, 00:19:22.807 "w_mbytes_per_sec": 0 00:19:22.807 }, 00:19:22.807 "claimed": true, 00:19:22.807 "claim_type": "exclusive_write", 00:19:22.807 "zoned": false, 00:19:22.807 "supported_io_types": { 00:19:22.807 "read": true, 00:19:22.807 "write": true, 00:19:22.807 "unmap": true, 00:19:22.807 "flush": true, 00:19:22.807 "reset": true, 00:19:22.807 "nvme_admin": false, 00:19:22.807 "nvme_io": false, 00:19:22.807 "nvme_io_md": false, 00:19:22.807 "write_zeroes": true, 00:19:22.807 "zcopy": true, 00:19:22.807 "get_zone_info": false, 00:19:22.807 "zone_management": false, 00:19:22.807 "zone_append": false, 00:19:22.807 "compare": false, 00:19:22.807 "compare_and_write": false, 00:19:22.807 "abort": true, 00:19:22.807 "seek_hole": false, 00:19:22.807 "seek_data": false, 00:19:22.807 "copy": true, 00:19:22.807 "nvme_iov_md": false 00:19:22.807 }, 00:19:22.807 "memory_domains": [ 00:19:22.807 { 00:19:22.807 "dma_device_id": "system", 00:19:22.807 "dma_device_type": 1 00:19:22.807 }, 00:19:22.807 { 00:19:22.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.807 "dma_device_type": 2 00:19:22.807 } 00:19:22.807 ], 00:19:22.807 "driver_specific": {} 00:19:22.807 } 00:19:22.807 ] 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.807 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:22.808 22:36:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.808 "name": "Existed_Raid", 00:19:22.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.808 "strip_size_kb": 64, 00:19:22.808 "state": 
"configuring", 00:19:22.808 "raid_level": "raid5f", 00:19:22.808 "superblock": false, 00:19:22.808 "num_base_bdevs": 4, 00:19:22.808 "num_base_bdevs_discovered": 3, 00:19:22.808 "num_base_bdevs_operational": 4, 00:19:22.808 "base_bdevs_list": [ 00:19:22.808 { 00:19:22.808 "name": "BaseBdev1", 00:19:22.808 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:22.808 "is_configured": true, 00:19:22.808 "data_offset": 0, 00:19:22.808 "data_size": 65536 00:19:22.808 }, 00:19:22.808 { 00:19:22.808 "name": null, 00:19:22.808 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:22.808 "is_configured": false, 00:19:22.808 "data_offset": 0, 00:19:22.808 "data_size": 65536 00:19:22.808 }, 00:19:22.808 { 00:19:22.808 "name": "BaseBdev3", 00:19:22.808 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:22.808 "is_configured": true, 00:19:22.808 "data_offset": 0, 00:19:22.808 "data_size": 65536 00:19:22.808 }, 00:19:22.808 { 00:19:22.808 "name": "BaseBdev4", 00:19:22.808 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:22.808 "is_configured": true, 00:19:22.808 "data_offset": 0, 00:19:22.808 "data_size": 65536 00:19:22.808 } 00:19:22.808 ] 00:19:22.808 }' 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.808 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.375 22:36:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.375 [2024-09-27 22:36:18.985123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.375 22:36:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.375 22:36:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.376 22:36:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.376 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.376 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.376 "name": "Existed_Raid", 00:19:23.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.376 "strip_size_kb": 64, 00:19:23.376 "state": "configuring", 00:19:23.376 "raid_level": "raid5f", 00:19:23.376 "superblock": false, 00:19:23.376 "num_base_bdevs": 4, 00:19:23.376 "num_base_bdevs_discovered": 2, 00:19:23.376 "num_base_bdevs_operational": 4, 00:19:23.376 "base_bdevs_list": [ 00:19:23.376 { 00:19:23.376 "name": "BaseBdev1", 00:19:23.376 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:23.376 "is_configured": true, 00:19:23.376 "data_offset": 0, 00:19:23.376 "data_size": 65536 00:19:23.376 }, 00:19:23.376 { 00:19:23.376 "name": null, 00:19:23.376 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:23.376 "is_configured": false, 00:19:23.376 "data_offset": 0, 00:19:23.376 "data_size": 65536 00:19:23.376 }, 00:19:23.376 { 00:19:23.376 "name": null, 00:19:23.376 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:23.376 "is_configured": false, 00:19:23.376 "data_offset": 0, 00:19:23.376 "data_size": 65536 00:19:23.376 }, 00:19:23.376 { 00:19:23.376 "name": "BaseBdev4", 00:19:23.376 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:23.376 "is_configured": true, 00:19:23.376 "data_offset": 0, 00:19:23.376 "data_size": 65536 00:19:23.376 } 00:19:23.376 ] 00:19:23.376 }' 00:19:23.376 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.376 22:36:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.634 [2024-09-27 22:36:19.425149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.634 
22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.634 "name": "Existed_Raid", 00:19:23.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.634 "strip_size_kb": 64, 00:19:23.634 "state": "configuring", 00:19:23.634 "raid_level": "raid5f", 00:19:23.634 "superblock": false, 00:19:23.634 "num_base_bdevs": 4, 00:19:23.634 "num_base_bdevs_discovered": 3, 00:19:23.634 "num_base_bdevs_operational": 4, 00:19:23.634 "base_bdevs_list": [ 00:19:23.634 { 00:19:23.634 "name": "BaseBdev1", 00:19:23.634 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:23.634 "is_configured": true, 00:19:23.634 "data_offset": 0, 00:19:23.634 "data_size": 65536 00:19:23.634 }, 00:19:23.634 { 00:19:23.634 "name": null, 00:19:23.634 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:23.634 "is_configured": 
false, 00:19:23.634 "data_offset": 0, 00:19:23.634 "data_size": 65536 00:19:23.634 }, 00:19:23.634 { 00:19:23.634 "name": "BaseBdev3", 00:19:23.634 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:23.634 "is_configured": true, 00:19:23.634 "data_offset": 0, 00:19:23.634 "data_size": 65536 00:19:23.634 }, 00:19:23.634 { 00:19:23.634 "name": "BaseBdev4", 00:19:23.634 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:23.634 "is_configured": true, 00:19:23.634 "data_offset": 0, 00:19:23.634 "data_size": 65536 00:19:23.634 } 00:19:23.634 ] 00:19:23.634 }' 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.634 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.201 [2024-09-27 22:36:19.901178] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.201 22:36:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.201 "name": "Existed_Raid", 00:19:24.201 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:24.201 "strip_size_kb": 64, 00:19:24.201 "state": "configuring", 00:19:24.201 "raid_level": "raid5f", 00:19:24.201 "superblock": false, 00:19:24.201 "num_base_bdevs": 4, 00:19:24.201 "num_base_bdevs_discovered": 2, 00:19:24.201 "num_base_bdevs_operational": 4, 00:19:24.201 "base_bdevs_list": [ 00:19:24.201 { 00:19:24.201 "name": null, 00:19:24.201 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:24.201 "is_configured": false, 00:19:24.201 "data_offset": 0, 00:19:24.201 "data_size": 65536 00:19:24.201 }, 00:19:24.201 { 00:19:24.201 "name": null, 00:19:24.201 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:24.201 "is_configured": false, 00:19:24.201 "data_offset": 0, 00:19:24.201 "data_size": 65536 00:19:24.201 }, 00:19:24.201 { 00:19:24.201 "name": "BaseBdev3", 00:19:24.201 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:24.201 "is_configured": true, 00:19:24.201 "data_offset": 0, 00:19:24.201 "data_size": 65536 00:19:24.201 }, 00:19:24.201 { 00:19:24.201 "name": "BaseBdev4", 00:19:24.201 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:24.201 "is_configured": true, 00:19:24.201 "data_offset": 0, 00:19:24.201 "data_size": 65536 00:19:24.201 } 00:19:24.201 ] 00:19:24.201 }' 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.201 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 [2024-09-27 22:36:20.428451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.768 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.768 "name": "Existed_Raid", 00:19:24.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.768 "strip_size_kb": 64, 00:19:24.768 "state": "configuring", 00:19:24.768 "raid_level": "raid5f", 00:19:24.768 "superblock": false, 00:19:24.768 "num_base_bdevs": 4, 00:19:24.768 "num_base_bdevs_discovered": 3, 00:19:24.768 "num_base_bdevs_operational": 4, 00:19:24.768 "base_bdevs_list": [ 00:19:24.768 { 00:19:24.768 "name": null, 00:19:24.768 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:24.768 "is_configured": false, 00:19:24.768 "data_offset": 0, 00:19:24.768 "data_size": 65536 00:19:24.768 }, 00:19:24.768 { 00:19:24.768 "name": "BaseBdev2", 00:19:24.768 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:24.768 "is_configured": true, 00:19:24.768 "data_offset": 0, 00:19:24.768 "data_size": 65536 00:19:24.768 }, 00:19:24.768 { 00:19:24.768 "name": "BaseBdev3", 00:19:24.768 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:24.768 "is_configured": true, 00:19:24.768 "data_offset": 0, 00:19:24.768 "data_size": 65536 00:19:24.768 }, 00:19:24.768 { 00:19:24.768 "name": "BaseBdev4", 00:19:24.768 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:24.768 "is_configured": true, 00:19:24.768 "data_offset": 0, 00:19:24.768 "data_size": 65536 00:19:24.768 } 00:19:24.768 ] 00:19:24.768 }' 00:19:24.769 22:36:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.769 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.027 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.027 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:25.027 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.286 22:36:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.286 [2024-09-27 22:36:21.025816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:25.286 [2024-09-27 
22:36:21.025888] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:25.286 [2024-09-27 22:36:21.025897] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:25.286 [2024-09-27 22:36:21.026214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:25.286 [2024-09-27 22:36:21.034146] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:25.286 [2024-09-27 22:36:21.034179] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:25.286 [2024-09-27 22:36:21.034473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.286 NewBaseBdev 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.286 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.286 [ 00:19:25.286 { 00:19:25.286 "name": "NewBaseBdev", 00:19:25.286 "aliases": [ 00:19:25.286 "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa" 00:19:25.286 ], 00:19:25.286 "product_name": "Malloc disk", 00:19:25.286 "block_size": 512, 00:19:25.286 "num_blocks": 65536, 00:19:25.286 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:25.286 "assigned_rate_limits": { 00:19:25.286 "rw_ios_per_sec": 0, 00:19:25.286 "rw_mbytes_per_sec": 0, 00:19:25.286 "r_mbytes_per_sec": 0, 00:19:25.286 "w_mbytes_per_sec": 0 00:19:25.286 }, 00:19:25.286 "claimed": true, 00:19:25.286 "claim_type": "exclusive_write", 00:19:25.286 "zoned": false, 00:19:25.286 "supported_io_types": { 00:19:25.286 "read": true, 00:19:25.286 "write": true, 00:19:25.286 "unmap": true, 00:19:25.287 "flush": true, 00:19:25.287 "reset": true, 00:19:25.287 "nvme_admin": false, 00:19:25.287 "nvme_io": false, 00:19:25.287 "nvme_io_md": false, 00:19:25.287 "write_zeroes": true, 00:19:25.287 "zcopy": true, 00:19:25.287 "get_zone_info": false, 00:19:25.287 "zone_management": false, 00:19:25.287 "zone_append": false, 00:19:25.287 "compare": false, 00:19:25.287 "compare_and_write": false, 00:19:25.287 "abort": true, 00:19:25.287 "seek_hole": false, 00:19:25.287 "seek_data": false, 00:19:25.287 "copy": true, 00:19:25.287 "nvme_iov_md": false 00:19:25.287 }, 00:19:25.287 "memory_domains": [ 00:19:25.287 { 00:19:25.287 "dma_device_id": "system", 00:19:25.287 "dma_device_type": 1 00:19:25.287 }, 00:19:25.287 { 00:19:25.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.287 "dma_device_type": 2 00:19:25.287 } 
00:19:25.287 ], 00:19:25.287 "driver_specific": {} 00:19:25.287 } 00:19:25.287 ] 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.287 "name": "Existed_Raid", 00:19:25.287 "uuid": "a398af3d-36f2-48b3-9dc8-671eb34bcb37", 00:19:25.287 "strip_size_kb": 64, 00:19:25.287 "state": "online", 00:19:25.287 "raid_level": "raid5f", 00:19:25.287 "superblock": false, 00:19:25.287 "num_base_bdevs": 4, 00:19:25.287 "num_base_bdevs_discovered": 4, 00:19:25.287 "num_base_bdevs_operational": 4, 00:19:25.287 "base_bdevs_list": [ 00:19:25.287 { 00:19:25.287 "name": "NewBaseBdev", 00:19:25.287 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:25.287 "is_configured": true, 00:19:25.287 "data_offset": 0, 00:19:25.287 "data_size": 65536 00:19:25.287 }, 00:19:25.287 { 00:19:25.287 "name": "BaseBdev2", 00:19:25.287 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:25.287 "is_configured": true, 00:19:25.287 "data_offset": 0, 00:19:25.287 "data_size": 65536 00:19:25.287 }, 00:19:25.287 { 00:19:25.287 "name": "BaseBdev3", 00:19:25.287 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:25.287 "is_configured": true, 00:19:25.287 "data_offset": 0, 00:19:25.287 "data_size": 65536 00:19:25.287 }, 00:19:25.287 { 00:19:25.287 "name": "BaseBdev4", 00:19:25.287 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:25.287 "is_configured": true, 00:19:25.287 "data_offset": 0, 00:19:25.287 "data_size": 65536 00:19:25.287 } 00:19:25.287 ] 00:19:25.287 }' 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.287 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.854 [2024-09-27 22:36:21.485844] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.854 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:25.854 "name": "Existed_Raid", 00:19:25.854 "aliases": [ 00:19:25.854 "a398af3d-36f2-48b3-9dc8-671eb34bcb37" 00:19:25.854 ], 00:19:25.854 "product_name": "Raid Volume", 00:19:25.854 "block_size": 512, 00:19:25.854 "num_blocks": 196608, 00:19:25.854 "uuid": "a398af3d-36f2-48b3-9dc8-671eb34bcb37", 00:19:25.854 "assigned_rate_limits": { 00:19:25.854 "rw_ios_per_sec": 0, 00:19:25.854 "rw_mbytes_per_sec": 0, 00:19:25.854 "r_mbytes_per_sec": 0, 00:19:25.854 "w_mbytes_per_sec": 0 00:19:25.854 }, 00:19:25.854 "claimed": false, 00:19:25.854 "zoned": false, 00:19:25.854 "supported_io_types": { 00:19:25.854 "read": true, 00:19:25.854 "write": true, 00:19:25.854 "unmap": false, 00:19:25.854 "flush": false, 00:19:25.854 "reset": true, 00:19:25.854 "nvme_admin": false, 00:19:25.854 "nvme_io": false, 00:19:25.854 "nvme_io_md": 
false, 00:19:25.854 "write_zeroes": true, 00:19:25.854 "zcopy": false, 00:19:25.854 "get_zone_info": false, 00:19:25.854 "zone_management": false, 00:19:25.854 "zone_append": false, 00:19:25.854 "compare": false, 00:19:25.854 "compare_and_write": false, 00:19:25.854 "abort": false, 00:19:25.854 "seek_hole": false, 00:19:25.854 "seek_data": false, 00:19:25.854 "copy": false, 00:19:25.854 "nvme_iov_md": false 00:19:25.854 }, 00:19:25.854 "driver_specific": { 00:19:25.854 "raid": { 00:19:25.854 "uuid": "a398af3d-36f2-48b3-9dc8-671eb34bcb37", 00:19:25.854 "strip_size_kb": 64, 00:19:25.854 "state": "online", 00:19:25.854 "raid_level": "raid5f", 00:19:25.854 "superblock": false, 00:19:25.854 "num_base_bdevs": 4, 00:19:25.854 "num_base_bdevs_discovered": 4, 00:19:25.854 "num_base_bdevs_operational": 4, 00:19:25.854 "base_bdevs_list": [ 00:19:25.854 { 00:19:25.854 "name": "NewBaseBdev", 00:19:25.854 "uuid": "60a3fc04-b9b0-47c4-b7d5-51cb3aff52aa", 00:19:25.854 "is_configured": true, 00:19:25.854 "data_offset": 0, 00:19:25.854 "data_size": 65536 00:19:25.854 }, 00:19:25.854 { 00:19:25.854 "name": "BaseBdev2", 00:19:25.854 "uuid": "e15caf2b-8b5c-47b9-b0d2-47564fd41d0d", 00:19:25.854 "is_configured": true, 00:19:25.854 "data_offset": 0, 00:19:25.854 "data_size": 65536 00:19:25.854 }, 00:19:25.854 { 00:19:25.854 "name": "BaseBdev3", 00:19:25.854 "uuid": "ff5d8392-4002-44a5-a7aa-392eaffa5bff", 00:19:25.854 "is_configured": true, 00:19:25.854 "data_offset": 0, 00:19:25.855 "data_size": 65536 00:19:25.855 }, 00:19:25.855 { 00:19:25.855 "name": "BaseBdev4", 00:19:25.855 "uuid": "d81b6c2b-d297-4c82-a006-d783b47f0d58", 00:19:25.855 "is_configured": true, 00:19:25.855 "data_offset": 0, 00:19:25.855 "data_size": 65536 00:19:25.855 } 00:19:25.855 ] 00:19:25.855 } 00:19:25.855 } 00:19:25.855 }' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.855 22:36:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:25.855 BaseBdev2 00:19:25.855 BaseBdev3 00:19:25.855 BaseBdev4' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.855 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.113 22:36:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.113 [2024-09-27 22:36:21.805171] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.113 [2024-09-27 22:36:21.805204] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:26.113 [2024-09-27 22:36:21.805298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.113 [2024-09-27 22:36:21.805643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.113 [2024-09-27 22:36:21.805656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83881 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83881 ']' 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83881 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83881 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.113 killing process with pid 83881 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83881' 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83881 00:19:26.113 [2024-09-27 22:36:21.860544] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.113 22:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83881 00:19:26.679 [2024-09-27 22:36:22.260074] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.577 ************************************ 00:19:28.577 END TEST raid5f_state_function_test 00:19:28.577 ************************************ 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:28.577 00:19:28.577 real 0m12.402s 00:19:28.577 user 0m18.796s 00:19:28.577 sys 0m2.482s 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.577 22:36:24 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:28.577 22:36:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:28.577 22:36:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.577 22:36:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.577 ************************************ 00:19:28.577 START TEST 
raid5f_state_function_test_sb 00:19:28.577 ************************************ 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:28.577 
22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:28.577 Process raid pid: 84558 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84558 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84558' 00:19:28.577 22:36:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84558 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84558 ']' 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.577 22:36:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.577 [2024-09-27 22:36:24.405690] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:19:28.577 [2024-09-27 22:36:24.406028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.835 [2024-09-27 22:36:24.571142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.098 [2024-09-27 22:36:24.797821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.358 [2024-09-27 22:36:25.028964] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.358 [2024-09-27 22:36:25.029227] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.924 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.924 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:29.924 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:29.924 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.924 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.924 [2024-09-27 22:36:25.505867] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.924 [2024-09-27 22:36:25.505925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.924 [2024-09-27 22:36:25.505937] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.924 [2024-09-27 22:36:25.505950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.924 [2024-09-27 22:36:25.505958] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:29.925 [2024-09-27 22:36:25.505987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:29.925 [2024-09-27 22:36:25.505996] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:29.925 [2024-09-27 22:36:25.506008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.925 "name": "Existed_Raid", 00:19:29.925 "uuid": "2ff827b8-3a60-4c85-9ea6-7d66968bece8", 00:19:29.925 "strip_size_kb": 64, 00:19:29.925 "state": "configuring", 00:19:29.925 "raid_level": "raid5f", 00:19:29.925 "superblock": true, 00:19:29.925 "num_base_bdevs": 4, 00:19:29.925 "num_base_bdevs_discovered": 0, 00:19:29.925 "num_base_bdevs_operational": 4, 00:19:29.925 "base_bdevs_list": [ 00:19:29.925 { 00:19:29.925 "name": "BaseBdev1", 00:19:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.925 "is_configured": false, 00:19:29.925 "data_offset": 0, 00:19:29.925 "data_size": 0 00:19:29.925 }, 00:19:29.925 { 00:19:29.925 "name": "BaseBdev2", 00:19:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.925 "is_configured": false, 00:19:29.925 "data_offset": 0, 00:19:29.925 "data_size": 0 00:19:29.925 }, 00:19:29.925 { 00:19:29.925 "name": "BaseBdev3", 00:19:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.925 "is_configured": false, 00:19:29.925 "data_offset": 0, 00:19:29.925 "data_size": 0 00:19:29.925 }, 00:19:29.925 { 00:19:29.925 "name": "BaseBdev4", 00:19:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.925 "is_configured": false, 00:19:29.925 "data_offset": 0, 00:19:29.925 "data_size": 0 00:19:29.925 } 00:19:29.925 ] 00:19:29.925 }' 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.925 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.184 [2024-09-27 22:36:25.965129] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.184 [2024-09-27 22:36:25.965173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.184 [2024-09-27 22:36:25.977161] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.184 [2024-09-27 22:36:25.977207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.184 [2024-09-27 22:36:25.977218] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.184 [2024-09-27 22:36:25.977231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.184 [2024-09-27 22:36:25.977238] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.184 [2024-09-27 22:36:25.977250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.184 [2024-09-27 22:36:25.977258] 
bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:30.184 [2024-09-27 22:36:25.977270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.184 22:36:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.184 [2024-09-27 22:36:26.028069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.184 BaseBdev1 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:30.184 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:30.185 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.185 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:30.185 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.185 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.185 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.185 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.185 [ 00:19:30.185 { 00:19:30.185 "name": "BaseBdev1", 00:19:30.185 "aliases": [ 00:19:30.185 "c7667872-b721-4c35-92fa-80394c2f4679" 00:19:30.185 ], 00:19:30.185 "product_name": "Malloc disk", 00:19:30.185 "block_size": 512, 00:19:30.185 "num_blocks": 65536, 00:19:30.185 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:30.185 "assigned_rate_limits": { 00:19:30.185 "rw_ios_per_sec": 0, 00:19:30.185 "rw_mbytes_per_sec": 0, 00:19:30.185 "r_mbytes_per_sec": 0, 00:19:30.185 "w_mbytes_per_sec": 0 00:19:30.185 }, 00:19:30.185 "claimed": true, 00:19:30.444 "claim_type": "exclusive_write", 00:19:30.444 "zoned": false, 00:19:30.444 "supported_io_types": { 00:19:30.444 "read": true, 00:19:30.444 "write": true, 00:19:30.444 "unmap": true, 00:19:30.444 "flush": true, 00:19:30.444 "reset": true, 00:19:30.444 "nvme_admin": false, 00:19:30.444 "nvme_io": false, 00:19:30.444 "nvme_io_md": false, 00:19:30.444 "write_zeroes": true, 00:19:30.444 "zcopy": true, 00:19:30.444 "get_zone_info": false, 00:19:30.444 "zone_management": false, 00:19:30.444 "zone_append": false, 00:19:30.444 "compare": false, 00:19:30.444 "compare_and_write": false, 00:19:30.444 "abort": true, 00:19:30.444 "seek_hole": false, 00:19:30.444 "seek_data": false, 00:19:30.444 "copy": true, 00:19:30.444 "nvme_iov_md": false 00:19:30.444 }, 00:19:30.444 "memory_domains": [ 00:19:30.444 { 00:19:30.444 "dma_device_id": "system", 00:19:30.444 "dma_device_type": 1 00:19:30.444 }, 00:19:30.444 { 00:19:30.444 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:30.444 "dma_device_type": 2 00:19:30.444 } 00:19:30.444 ], 00:19:30.444 "driver_specific": {} 00:19:30.444 } 00:19:30.444 ] 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.444 22:36:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.444 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.444 "name": "Existed_Raid", 00:19:30.444 "uuid": "12427070-bffb-42a1-a8e7-6651e8fbcbb2", 00:19:30.444 "strip_size_kb": 64, 00:19:30.444 "state": "configuring", 00:19:30.444 "raid_level": "raid5f", 00:19:30.444 "superblock": true, 00:19:30.444 "num_base_bdevs": 4, 00:19:30.444 "num_base_bdevs_discovered": 1, 00:19:30.444 "num_base_bdevs_operational": 4, 00:19:30.444 "base_bdevs_list": [ 00:19:30.444 { 00:19:30.444 "name": "BaseBdev1", 00:19:30.444 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:30.444 "is_configured": true, 00:19:30.444 "data_offset": 2048, 00:19:30.444 "data_size": 63488 00:19:30.444 }, 00:19:30.444 { 00:19:30.444 "name": "BaseBdev2", 00:19:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.444 "is_configured": false, 00:19:30.444 "data_offset": 0, 00:19:30.444 "data_size": 0 00:19:30.444 }, 00:19:30.445 { 00:19:30.445 "name": "BaseBdev3", 00:19:30.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.445 "is_configured": false, 00:19:30.445 "data_offset": 0, 00:19:30.445 "data_size": 0 00:19:30.445 }, 00:19:30.445 { 00:19:30.445 "name": "BaseBdev4", 00:19:30.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.445 "is_configured": false, 00:19:30.445 "data_offset": 0, 00:19:30.445 "data_size": 0 00:19:30.445 } 00:19:30.445 ] 00:19:30.445 }' 00:19:30.445 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.445 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:30.703 22:36:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.703 [2024-09-27 22:36:26.464109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.703 [2024-09-27 22:36:26.464166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.703 [2024-09-27 22:36:26.472180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.703 [2024-09-27 22:36:26.474397] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.703 [2024-09-27 22:36:26.474549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.703 [2024-09-27 22:36:26.474636] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.703 [2024-09-27 22:36:26.474682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.703 [2024-09-27 22:36:26.474758] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:30.703 [2024-09-27 22:36:26.474799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.703 22:36:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.703 "name": "Existed_Raid", 00:19:30.703 "uuid": "bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:30.703 "strip_size_kb": 64, 00:19:30.703 "state": "configuring", 00:19:30.703 "raid_level": "raid5f", 00:19:30.703 "superblock": true, 00:19:30.703 "num_base_bdevs": 4, 00:19:30.703 "num_base_bdevs_discovered": 1, 00:19:30.703 "num_base_bdevs_operational": 4, 00:19:30.703 "base_bdevs_list": [ 00:19:30.703 { 00:19:30.703 "name": "BaseBdev1", 00:19:30.703 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:30.703 "is_configured": true, 00:19:30.703 "data_offset": 2048, 00:19:30.703 "data_size": 63488 00:19:30.703 }, 00:19:30.703 { 00:19:30.703 "name": "BaseBdev2", 00:19:30.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.703 "is_configured": false, 00:19:30.703 "data_offset": 0, 00:19:30.703 "data_size": 0 00:19:30.703 }, 00:19:30.703 { 00:19:30.703 "name": "BaseBdev3", 00:19:30.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.703 "is_configured": false, 00:19:30.703 "data_offset": 0, 00:19:30.703 "data_size": 0 00:19:30.703 }, 00:19:30.703 { 00:19:30.703 "name": "BaseBdev4", 00:19:30.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.703 "is_configured": false, 00:19:30.703 "data_offset": 0, 00:19:30.703 "data_size": 0 00:19:30.703 } 00:19:30.703 ] 00:19:30.703 }' 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.703 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 [2024-09-27 22:36:26.944632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.270 BaseBdev2 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 [ 00:19:31.270 { 00:19:31.270 "name": "BaseBdev2", 00:19:31.270 "aliases": [ 00:19:31.270 
"f0395e53-762c-4919-9abb-39536af00b67" 00:19:31.270 ], 00:19:31.270 "product_name": "Malloc disk", 00:19:31.270 "block_size": 512, 00:19:31.270 "num_blocks": 65536, 00:19:31.270 "uuid": "f0395e53-762c-4919-9abb-39536af00b67", 00:19:31.270 "assigned_rate_limits": { 00:19:31.270 "rw_ios_per_sec": 0, 00:19:31.270 "rw_mbytes_per_sec": 0, 00:19:31.270 "r_mbytes_per_sec": 0, 00:19:31.270 "w_mbytes_per_sec": 0 00:19:31.270 }, 00:19:31.270 "claimed": true, 00:19:31.270 "claim_type": "exclusive_write", 00:19:31.270 "zoned": false, 00:19:31.270 "supported_io_types": { 00:19:31.270 "read": true, 00:19:31.270 "write": true, 00:19:31.270 "unmap": true, 00:19:31.270 "flush": true, 00:19:31.270 "reset": true, 00:19:31.270 "nvme_admin": false, 00:19:31.270 "nvme_io": false, 00:19:31.270 "nvme_io_md": false, 00:19:31.270 "write_zeroes": true, 00:19:31.270 "zcopy": true, 00:19:31.270 "get_zone_info": false, 00:19:31.270 "zone_management": false, 00:19:31.270 "zone_append": false, 00:19:31.270 "compare": false, 00:19:31.270 "compare_and_write": false, 00:19:31.270 "abort": true, 00:19:31.270 "seek_hole": false, 00:19:31.270 "seek_data": false, 00:19:31.270 "copy": true, 00:19:31.270 "nvme_iov_md": false 00:19:31.270 }, 00:19:31.270 "memory_domains": [ 00:19:31.270 { 00:19:31.270 "dma_device_id": "system", 00:19:31.270 "dma_device_type": 1 00:19:31.270 }, 00:19:31.270 { 00:19:31.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.270 "dma_device_type": 2 00:19:31.270 } 00:19:31.270 ], 00:19:31.270 "driver_specific": {} 00:19:31.270 } 00:19:31.270 ] 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.270 22:36:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.270 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.270 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.270 "name": "Existed_Raid", 00:19:31.270 "uuid": 
"bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:31.270 "strip_size_kb": 64, 00:19:31.271 "state": "configuring", 00:19:31.271 "raid_level": "raid5f", 00:19:31.271 "superblock": true, 00:19:31.271 "num_base_bdevs": 4, 00:19:31.271 "num_base_bdevs_discovered": 2, 00:19:31.271 "num_base_bdevs_operational": 4, 00:19:31.271 "base_bdevs_list": [ 00:19:31.271 { 00:19:31.271 "name": "BaseBdev1", 00:19:31.271 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:31.271 "is_configured": true, 00:19:31.271 "data_offset": 2048, 00:19:31.271 "data_size": 63488 00:19:31.271 }, 00:19:31.271 { 00:19:31.271 "name": "BaseBdev2", 00:19:31.271 "uuid": "f0395e53-762c-4919-9abb-39536af00b67", 00:19:31.271 "is_configured": true, 00:19:31.271 "data_offset": 2048, 00:19:31.271 "data_size": 63488 00:19:31.271 }, 00:19:31.271 { 00:19:31.271 "name": "BaseBdev3", 00:19:31.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.271 "is_configured": false, 00:19:31.271 "data_offset": 0, 00:19:31.271 "data_size": 0 00:19:31.271 }, 00:19:31.271 { 00:19:31.271 "name": "BaseBdev4", 00:19:31.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.271 "is_configured": false, 00:19:31.271 "data_offset": 0, 00:19:31.271 "data_size": 0 00:19:31.271 } 00:19:31.271 ] 00:19:31.271 }' 00:19:31.271 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.271 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 [2024-09-27 22:36:27.472658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.839 BaseBdev3 
00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 [ 00:19:31.839 { 00:19:31.839 "name": "BaseBdev3", 00:19:31.839 "aliases": [ 00:19:31.839 "de3340eb-cb5b-41d2-a79b-e71f89fc325d" 00:19:31.839 ], 00:19:31.839 "product_name": "Malloc disk", 00:19:31.839 "block_size": 512, 00:19:31.839 "num_blocks": 65536, 00:19:31.839 "uuid": "de3340eb-cb5b-41d2-a79b-e71f89fc325d", 00:19:31.839 
"assigned_rate_limits": { 00:19:31.839 "rw_ios_per_sec": 0, 00:19:31.839 "rw_mbytes_per_sec": 0, 00:19:31.839 "r_mbytes_per_sec": 0, 00:19:31.839 "w_mbytes_per_sec": 0 00:19:31.839 }, 00:19:31.839 "claimed": true, 00:19:31.839 "claim_type": "exclusive_write", 00:19:31.839 "zoned": false, 00:19:31.839 "supported_io_types": { 00:19:31.839 "read": true, 00:19:31.839 "write": true, 00:19:31.839 "unmap": true, 00:19:31.839 "flush": true, 00:19:31.839 "reset": true, 00:19:31.839 "nvme_admin": false, 00:19:31.839 "nvme_io": false, 00:19:31.839 "nvme_io_md": false, 00:19:31.839 "write_zeroes": true, 00:19:31.839 "zcopy": true, 00:19:31.839 "get_zone_info": false, 00:19:31.839 "zone_management": false, 00:19:31.839 "zone_append": false, 00:19:31.839 "compare": false, 00:19:31.839 "compare_and_write": false, 00:19:31.839 "abort": true, 00:19:31.839 "seek_hole": false, 00:19:31.839 "seek_data": false, 00:19:31.839 "copy": true, 00:19:31.839 "nvme_iov_md": false 00:19:31.839 }, 00:19:31.839 "memory_domains": [ 00:19:31.839 { 00:19:31.839 "dma_device_id": "system", 00:19:31.839 "dma_device_type": 1 00:19:31.839 }, 00:19:31.839 { 00:19:31.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.839 "dma_device_type": 2 00:19:31.839 } 00:19:31.839 ], 00:19:31.839 "driver_specific": {} 00:19:31.839 } 00:19:31.839 ] 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.839 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.839 "name": "Existed_Raid", 00:19:31.839 "uuid": "bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:31.839 "strip_size_kb": 64, 00:19:31.839 "state": "configuring", 00:19:31.839 "raid_level": "raid5f", 00:19:31.839 "superblock": true, 00:19:31.839 "num_base_bdevs": 4, 00:19:31.839 "num_base_bdevs_discovered": 3, 
00:19:31.839 "num_base_bdevs_operational": 4, 00:19:31.839 "base_bdevs_list": [ 00:19:31.839 { 00:19:31.839 "name": "BaseBdev1", 00:19:31.839 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:31.839 "is_configured": true, 00:19:31.839 "data_offset": 2048, 00:19:31.839 "data_size": 63488 00:19:31.839 }, 00:19:31.839 { 00:19:31.839 "name": "BaseBdev2", 00:19:31.839 "uuid": "f0395e53-762c-4919-9abb-39536af00b67", 00:19:31.839 "is_configured": true, 00:19:31.839 "data_offset": 2048, 00:19:31.839 "data_size": 63488 00:19:31.839 }, 00:19:31.839 { 00:19:31.839 "name": "BaseBdev3", 00:19:31.839 "uuid": "de3340eb-cb5b-41d2-a79b-e71f89fc325d", 00:19:31.840 "is_configured": true, 00:19:31.840 "data_offset": 2048, 00:19:31.840 "data_size": 63488 00:19:31.840 }, 00:19:31.840 { 00:19:31.840 "name": "BaseBdev4", 00:19:31.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.840 "is_configured": false, 00:19:31.840 "data_offset": 0, 00:19:31.840 "data_size": 0 00:19:31.840 } 00:19:31.840 ] 00:19:31.840 }' 00:19:31.840 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.840 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.097 [2024-09-27 22:36:27.965163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:32.097 [2024-09-27 22:36:27.965443] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:32.097 [2024-09-27 22:36:27.965462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:32.097 [2024-09-27 
22:36:27.965734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:32.097 BaseBdev4 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.097 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.097 [2024-09-27 22:36:27.973985] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:32.097 [2024-09-27 22:36:27.974128] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:32.097 [2024-09-27 22:36:27.974508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.356 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.356 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:32.356 22:36:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.356 22:36:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.356 [ 00:19:32.356 { 00:19:32.356 "name": "BaseBdev4", 00:19:32.356 "aliases": [ 00:19:32.356 "437096c4-3c1d-42ca-98f7-46e0ece88ba8" 00:19:32.356 ], 00:19:32.356 "product_name": "Malloc disk", 00:19:32.356 "block_size": 512, 00:19:32.356 "num_blocks": 65536, 00:19:32.356 "uuid": "437096c4-3c1d-42ca-98f7-46e0ece88ba8", 00:19:32.356 "assigned_rate_limits": { 00:19:32.356 "rw_ios_per_sec": 0, 00:19:32.356 "rw_mbytes_per_sec": 0, 00:19:32.356 "r_mbytes_per_sec": 0, 00:19:32.356 "w_mbytes_per_sec": 0 00:19:32.356 }, 00:19:32.356 "claimed": true, 00:19:32.356 "claim_type": "exclusive_write", 00:19:32.356 "zoned": false, 00:19:32.356 "supported_io_types": { 00:19:32.356 "read": true, 00:19:32.356 "write": true, 00:19:32.356 "unmap": true, 00:19:32.356 "flush": true, 00:19:32.356 "reset": true, 00:19:32.356 "nvme_admin": false, 00:19:32.356 "nvme_io": false, 00:19:32.356 "nvme_io_md": false, 00:19:32.356 "write_zeroes": true, 00:19:32.356 "zcopy": true, 00:19:32.356 "get_zone_info": false, 00:19:32.356 "zone_management": false, 00:19:32.356 "zone_append": false, 00:19:32.356 "compare": false, 00:19:32.356 "compare_and_write": false, 00:19:32.356 "abort": true, 00:19:32.356 "seek_hole": false, 00:19:32.356 "seek_data": false, 00:19:32.356 "copy": true, 00:19:32.356 "nvme_iov_md": false 00:19:32.356 }, 00:19:32.356 "memory_domains": [ 00:19:32.356 { 00:19:32.356 "dma_device_id": "system", 00:19:32.356 "dma_device_type": 1 00:19:32.356 }, 00:19:32.356 { 00:19:32.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.356 "dma_device_type": 2 00:19:32.356 } 00:19:32.356 ], 00:19:32.356 "driver_specific": {} 00:19:32.356 } 00:19:32.356 ] 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.356 22:36:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.356 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.356 "name": "Existed_Raid", 00:19:32.356 "uuid": "bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:32.356 "strip_size_kb": 64, 00:19:32.356 "state": "online", 00:19:32.356 "raid_level": "raid5f", 00:19:32.356 "superblock": true, 00:19:32.356 "num_base_bdevs": 4, 00:19:32.356 "num_base_bdevs_discovered": 4, 00:19:32.356 "num_base_bdevs_operational": 4, 00:19:32.356 "base_bdevs_list": [ 00:19:32.356 { 00:19:32.356 "name": "BaseBdev1", 00:19:32.356 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:32.356 "is_configured": true, 00:19:32.356 "data_offset": 2048, 00:19:32.356 "data_size": 63488 00:19:32.356 }, 00:19:32.356 { 00:19:32.356 "name": "BaseBdev2", 00:19:32.356 "uuid": "f0395e53-762c-4919-9abb-39536af00b67", 00:19:32.356 "is_configured": true, 00:19:32.356 "data_offset": 2048, 00:19:32.356 "data_size": 63488 00:19:32.356 }, 00:19:32.356 { 00:19:32.356 "name": "BaseBdev3", 00:19:32.357 "uuid": "de3340eb-cb5b-41d2-a79b-e71f89fc325d", 00:19:32.357 "is_configured": true, 00:19:32.357 "data_offset": 2048, 00:19:32.357 "data_size": 63488 00:19:32.357 }, 00:19:32.357 { 00:19:32.357 "name": "BaseBdev4", 00:19:32.357 "uuid": "437096c4-3c1d-42ca-98f7-46e0ece88ba8", 00:19:32.357 "is_configured": true, 00:19:32.357 "data_offset": 2048, 00:19:32.357 "data_size": 63488 00:19:32.357 } 00:19:32.357 ] 00:19:32.357 }' 00:19:32.357 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.357 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.614 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.615 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.615 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.615 [2024-09-27 22:36:28.445563] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.615 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.615 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.615 "name": "Existed_Raid", 00:19:32.615 "aliases": [ 00:19:32.615 "bda746c8-23cb-45e7-b801-39d28de1fc90" 00:19:32.615 ], 00:19:32.615 "product_name": "Raid Volume", 00:19:32.615 "block_size": 512, 00:19:32.615 "num_blocks": 190464, 00:19:32.615 "uuid": "bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:32.615 "assigned_rate_limits": { 00:19:32.615 "rw_ios_per_sec": 0, 00:19:32.615 "rw_mbytes_per_sec": 0, 00:19:32.615 "r_mbytes_per_sec": 0, 00:19:32.615 "w_mbytes_per_sec": 0 00:19:32.615 }, 00:19:32.615 "claimed": false, 00:19:32.615 "zoned": false, 00:19:32.615 "supported_io_types": { 00:19:32.615 "read": true, 00:19:32.615 "write": true, 00:19:32.615 "unmap": false, 00:19:32.615 "flush": false, 
00:19:32.615 "reset": true, 00:19:32.615 "nvme_admin": false, 00:19:32.615 "nvme_io": false, 00:19:32.615 "nvme_io_md": false, 00:19:32.615 "write_zeroes": true, 00:19:32.615 "zcopy": false, 00:19:32.615 "get_zone_info": false, 00:19:32.615 "zone_management": false, 00:19:32.615 "zone_append": false, 00:19:32.615 "compare": false, 00:19:32.615 "compare_and_write": false, 00:19:32.615 "abort": false, 00:19:32.615 "seek_hole": false, 00:19:32.615 "seek_data": false, 00:19:32.615 "copy": false, 00:19:32.615 "nvme_iov_md": false 00:19:32.615 }, 00:19:32.615 "driver_specific": { 00:19:32.615 "raid": { 00:19:32.615 "uuid": "bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:32.615 "strip_size_kb": 64, 00:19:32.615 "state": "online", 00:19:32.615 "raid_level": "raid5f", 00:19:32.615 "superblock": true, 00:19:32.615 "num_base_bdevs": 4, 00:19:32.615 "num_base_bdevs_discovered": 4, 00:19:32.615 "num_base_bdevs_operational": 4, 00:19:32.615 "base_bdevs_list": [ 00:19:32.615 { 00:19:32.615 "name": "BaseBdev1", 00:19:32.615 "uuid": "c7667872-b721-4c35-92fa-80394c2f4679", 00:19:32.615 "is_configured": true, 00:19:32.615 "data_offset": 2048, 00:19:32.615 "data_size": 63488 00:19:32.615 }, 00:19:32.615 { 00:19:32.615 "name": "BaseBdev2", 00:19:32.615 "uuid": "f0395e53-762c-4919-9abb-39536af00b67", 00:19:32.615 "is_configured": true, 00:19:32.615 "data_offset": 2048, 00:19:32.615 "data_size": 63488 00:19:32.615 }, 00:19:32.615 { 00:19:32.615 "name": "BaseBdev3", 00:19:32.615 "uuid": "de3340eb-cb5b-41d2-a79b-e71f89fc325d", 00:19:32.615 "is_configured": true, 00:19:32.615 "data_offset": 2048, 00:19:32.615 "data_size": 63488 00:19:32.615 }, 00:19:32.615 { 00:19:32.615 "name": "BaseBdev4", 00:19:32.615 "uuid": "437096c4-3c1d-42ca-98f7-46e0ece88ba8", 00:19:32.615 "is_configured": true, 00:19:32.615 "data_offset": 2048, 00:19:32.615 "data_size": 63488 00:19:32.615 } 00:19:32.615 ] 00:19:32.615 } 00:19:32.615 } 00:19:32.615 }' 00:19:32.615 22:36:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.873 BaseBdev2 00:19:32.873 BaseBdev3 00:19:32.873 BaseBdev4' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.873 22:36:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.873 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.873 [2024-09-27 22:36:28.749161] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.132 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.132 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:33.132 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:33.132 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:33.132 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.133 "name": "Existed_Raid", 00:19:33.133 "uuid": "bda746c8-23cb-45e7-b801-39d28de1fc90", 00:19:33.133 "strip_size_kb": 64, 00:19:33.133 "state": "online", 00:19:33.133 "raid_level": "raid5f", 00:19:33.133 "superblock": true, 00:19:33.133 "num_base_bdevs": 4, 00:19:33.133 "num_base_bdevs_discovered": 3, 00:19:33.133 "num_base_bdevs_operational": 3, 00:19:33.133 "base_bdevs_list": [ 00:19:33.133 { 00:19:33.133 "name": 
null, 00:19:33.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.133 "is_configured": false, 00:19:33.133 "data_offset": 0, 00:19:33.133 "data_size": 63488 00:19:33.133 }, 00:19:33.133 { 00:19:33.133 "name": "BaseBdev2", 00:19:33.133 "uuid": "f0395e53-762c-4919-9abb-39536af00b67", 00:19:33.133 "is_configured": true, 00:19:33.133 "data_offset": 2048, 00:19:33.133 "data_size": 63488 00:19:33.133 }, 00:19:33.133 { 00:19:33.133 "name": "BaseBdev3", 00:19:33.133 "uuid": "de3340eb-cb5b-41d2-a79b-e71f89fc325d", 00:19:33.133 "is_configured": true, 00:19:33.133 "data_offset": 2048, 00:19:33.133 "data_size": 63488 00:19:33.133 }, 00:19:33.133 { 00:19:33.133 "name": "BaseBdev4", 00:19:33.133 "uuid": "437096c4-3c1d-42ca-98f7-46e0ece88ba8", 00:19:33.133 "is_configured": true, 00:19:33.133 "data_offset": 2048, 00:19:33.133 "data_size": 63488 00:19:33.133 } 00:19:33.133 ] 00:19:33.133 }' 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.133 22:36:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.699 [2024-09-27 22:36:29.342159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.699 [2024-09-27 22:36:29.342315] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.699 [2024-09-27 22:36:29.436765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.699 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.699 [2024-09-27 22:36:29.492719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.958 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.959 [2024-09-27 
22:36:29.643244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:33.959 [2024-09-27 22:36:29.643315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.959 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.959 22:36:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.218 BaseBdev2 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.218 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.218 [ 00:19:34.218 { 00:19:34.218 "name": "BaseBdev2", 00:19:34.218 "aliases": [ 00:19:34.218 "f134b5ae-b614-44fd-9025-1cdec8892caa" 00:19:34.218 ], 00:19:34.218 "product_name": "Malloc disk", 00:19:34.218 "block_size": 512, 00:19:34.218 
"num_blocks": 65536, 00:19:34.218 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:34.218 "assigned_rate_limits": { 00:19:34.218 "rw_ios_per_sec": 0, 00:19:34.218 "rw_mbytes_per_sec": 0, 00:19:34.219 "r_mbytes_per_sec": 0, 00:19:34.219 "w_mbytes_per_sec": 0 00:19:34.219 }, 00:19:34.219 "claimed": false, 00:19:34.219 "zoned": false, 00:19:34.219 "supported_io_types": { 00:19:34.219 "read": true, 00:19:34.219 "write": true, 00:19:34.219 "unmap": true, 00:19:34.219 "flush": true, 00:19:34.219 "reset": true, 00:19:34.219 "nvme_admin": false, 00:19:34.219 "nvme_io": false, 00:19:34.219 "nvme_io_md": false, 00:19:34.219 "write_zeroes": true, 00:19:34.219 "zcopy": true, 00:19:34.219 "get_zone_info": false, 00:19:34.219 "zone_management": false, 00:19:34.219 "zone_append": false, 00:19:34.219 "compare": false, 00:19:34.219 "compare_and_write": false, 00:19:34.219 "abort": true, 00:19:34.219 "seek_hole": false, 00:19:34.219 "seek_data": false, 00:19:34.219 "copy": true, 00:19:34.219 "nvme_iov_md": false 00:19:34.219 }, 00:19:34.219 "memory_domains": [ 00:19:34.219 { 00:19:34.219 "dma_device_id": "system", 00:19:34.219 "dma_device_type": 1 00:19:34.219 }, 00:19:34.219 { 00:19:34.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.219 "dma_device_type": 2 00:19:34.219 } 00:19:34.219 ], 00:19:34.219 "driver_specific": {} 00:19:34.219 } 00:19:34.219 ] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:34.219 22:36:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.219 BaseBdev3 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.219 [ 00:19:34.219 { 00:19:34.219 "name": "BaseBdev3", 00:19:34.219 "aliases": [ 00:19:34.219 
"b7c1b15a-8118-47ac-835c-0b856a9efea1" 00:19:34.219 ], 00:19:34.219 "product_name": "Malloc disk", 00:19:34.219 "block_size": 512, 00:19:34.219 "num_blocks": 65536, 00:19:34.219 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:34.219 "assigned_rate_limits": { 00:19:34.219 "rw_ios_per_sec": 0, 00:19:34.219 "rw_mbytes_per_sec": 0, 00:19:34.219 "r_mbytes_per_sec": 0, 00:19:34.219 "w_mbytes_per_sec": 0 00:19:34.219 }, 00:19:34.219 "claimed": false, 00:19:34.219 "zoned": false, 00:19:34.219 "supported_io_types": { 00:19:34.219 "read": true, 00:19:34.219 "write": true, 00:19:34.219 "unmap": true, 00:19:34.219 "flush": true, 00:19:34.219 "reset": true, 00:19:34.219 "nvme_admin": false, 00:19:34.219 "nvme_io": false, 00:19:34.219 "nvme_io_md": false, 00:19:34.219 "write_zeroes": true, 00:19:34.219 "zcopy": true, 00:19:34.219 "get_zone_info": false, 00:19:34.219 "zone_management": false, 00:19:34.219 "zone_append": false, 00:19:34.219 "compare": false, 00:19:34.219 "compare_and_write": false, 00:19:34.219 "abort": true, 00:19:34.219 "seek_hole": false, 00:19:34.219 "seek_data": false, 00:19:34.219 "copy": true, 00:19:34.219 "nvme_iov_md": false 00:19:34.219 }, 00:19:34.219 "memory_domains": [ 00:19:34.219 { 00:19:34.219 "dma_device_id": "system", 00:19:34.219 "dma_device_type": 1 00:19:34.219 }, 00:19:34.219 { 00:19:34.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.219 "dma_device_type": 2 00:19:34.219 } 00:19:34.219 ], 00:19:34.219 "driver_specific": {} 00:19:34.219 } 00:19:34.219 ] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:34.219 22:36:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.219 BaseBdev4 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:34.219 [ 00:19:34.219 { 00:19:34.219 "name": "BaseBdev4", 00:19:34.219 "aliases": [ 00:19:34.219 "f8ede215-3134-42ba-8ef2-98ccaea2d47f" 00:19:34.219 ], 00:19:34.219 "product_name": "Malloc disk", 00:19:34.219 "block_size": 512, 00:19:34.219 "num_blocks": 65536, 00:19:34.219 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:34.219 "assigned_rate_limits": { 00:19:34.219 "rw_ios_per_sec": 0, 00:19:34.219 "rw_mbytes_per_sec": 0, 00:19:34.219 "r_mbytes_per_sec": 0, 00:19:34.219 "w_mbytes_per_sec": 0 00:19:34.219 }, 00:19:34.219 "claimed": false, 00:19:34.219 "zoned": false, 00:19:34.219 "supported_io_types": { 00:19:34.219 "read": true, 00:19:34.219 "write": true, 00:19:34.219 "unmap": true, 00:19:34.219 "flush": true, 00:19:34.219 "reset": true, 00:19:34.219 "nvme_admin": false, 00:19:34.219 "nvme_io": false, 00:19:34.219 "nvme_io_md": false, 00:19:34.219 "write_zeroes": true, 00:19:34.219 "zcopy": true, 00:19:34.219 "get_zone_info": false, 00:19:34.219 "zone_management": false, 00:19:34.219 "zone_append": false, 00:19:34.219 "compare": false, 00:19:34.219 "compare_and_write": false, 00:19:34.219 "abort": true, 00:19:34.219 "seek_hole": false, 00:19:34.219 "seek_data": false, 00:19:34.219 "copy": true, 00:19:34.219 "nvme_iov_md": false 00:19:34.219 }, 00:19:34.219 "memory_domains": [ 00:19:34.219 { 00:19:34.219 "dma_device_id": "system", 00:19:34.219 "dma_device_type": 1 00:19:34.219 }, 00:19:34.219 { 00:19:34.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.219 "dma_device_type": 2 00:19:34.219 } 00:19:34.219 ], 00:19:34.219 "driver_specific": {} 00:19:34.219 } 00:19:34.219 ] 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:34.219 22:36:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.219 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.219 [2024-09-27 22:36:30.083179] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:34.219 [2024-09-27 22:36:30.083347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:34.219 [2024-09-27 22:36:30.083389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.219 [2024-09-27 22:36:30.085583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:34.219 [2024-09-27 22:36:30.085638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.220 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.478 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.478 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.478 "name": "Existed_Raid", 00:19:34.478 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:34.478 "strip_size_kb": 64, 00:19:34.478 "state": "configuring", 00:19:34.478 "raid_level": "raid5f", 00:19:34.478 "superblock": true, 00:19:34.478 "num_base_bdevs": 4, 00:19:34.478 "num_base_bdevs_discovered": 3, 00:19:34.478 "num_base_bdevs_operational": 4, 00:19:34.478 "base_bdevs_list": [ 00:19:34.478 { 00:19:34.478 "name": "BaseBdev1", 00:19:34.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.478 "is_configured": false, 00:19:34.478 "data_offset": 0, 00:19:34.478 "data_size": 0 00:19:34.478 }, 00:19:34.478 { 00:19:34.478 "name": "BaseBdev2", 00:19:34.478 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:34.478 "is_configured": true, 00:19:34.478 "data_offset": 2048, 00:19:34.478 
"data_size": 63488 00:19:34.478 }, 00:19:34.478 { 00:19:34.478 "name": "BaseBdev3", 00:19:34.478 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:34.478 "is_configured": true, 00:19:34.478 "data_offset": 2048, 00:19:34.478 "data_size": 63488 00:19:34.478 }, 00:19:34.478 { 00:19:34.478 "name": "BaseBdev4", 00:19:34.478 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:34.478 "is_configured": true, 00:19:34.478 "data_offset": 2048, 00:19:34.478 "data_size": 63488 00:19:34.478 } 00:19:34.478 ] 00:19:34.478 }' 00:19:34.478 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.478 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.736 [2024-09-27 22:36:30.514767] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.736 22:36:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.736 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.737 "name": "Existed_Raid", 00:19:34.737 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:34.737 "strip_size_kb": 64, 00:19:34.737 "state": "configuring", 00:19:34.737 "raid_level": "raid5f", 00:19:34.737 "superblock": true, 00:19:34.737 "num_base_bdevs": 4, 00:19:34.737 "num_base_bdevs_discovered": 2, 00:19:34.737 "num_base_bdevs_operational": 4, 00:19:34.737 "base_bdevs_list": [ 00:19:34.737 { 00:19:34.737 "name": "BaseBdev1", 00:19:34.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.737 "is_configured": false, 00:19:34.737 "data_offset": 0, 00:19:34.737 "data_size": 0 00:19:34.737 }, 00:19:34.737 { 00:19:34.737 "name": null, 00:19:34.737 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:34.737 
"is_configured": false, 00:19:34.737 "data_offset": 0, 00:19:34.737 "data_size": 63488 00:19:34.737 }, 00:19:34.737 { 00:19:34.737 "name": "BaseBdev3", 00:19:34.737 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:34.737 "is_configured": true, 00:19:34.737 "data_offset": 2048, 00:19:34.737 "data_size": 63488 00:19:34.737 }, 00:19:34.737 { 00:19:34.737 "name": "BaseBdev4", 00:19:34.737 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:34.737 "is_configured": true, 00:19:34.737 "data_offset": 2048, 00:19:34.737 "data_size": 63488 00:19:34.737 } 00:19:34.737 ] 00:19:34.737 }' 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.737 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.303 [2024-09-27 22:36:30.984797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:35.303 BaseBdev1 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.303 22:36:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.303 [ 00:19:35.303 { 00:19:35.303 "name": "BaseBdev1", 00:19:35.303 "aliases": [ 00:19:35.303 "32385b5e-f854-48a7-8c54-a70c1596b970" 00:19:35.303 ], 00:19:35.303 "product_name": "Malloc disk", 00:19:35.303 "block_size": 512, 00:19:35.303 "num_blocks": 65536, 00:19:35.303 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 
00:19:35.303 "assigned_rate_limits": { 00:19:35.303 "rw_ios_per_sec": 0, 00:19:35.303 "rw_mbytes_per_sec": 0, 00:19:35.303 "r_mbytes_per_sec": 0, 00:19:35.303 "w_mbytes_per_sec": 0 00:19:35.303 }, 00:19:35.303 "claimed": true, 00:19:35.303 "claim_type": "exclusive_write", 00:19:35.303 "zoned": false, 00:19:35.303 "supported_io_types": { 00:19:35.303 "read": true, 00:19:35.303 "write": true, 00:19:35.303 "unmap": true, 00:19:35.303 "flush": true, 00:19:35.303 "reset": true, 00:19:35.303 "nvme_admin": false, 00:19:35.303 "nvme_io": false, 00:19:35.303 "nvme_io_md": false, 00:19:35.303 "write_zeroes": true, 00:19:35.303 "zcopy": true, 00:19:35.303 "get_zone_info": false, 00:19:35.303 "zone_management": false, 00:19:35.303 "zone_append": false, 00:19:35.303 "compare": false, 00:19:35.303 "compare_and_write": false, 00:19:35.303 "abort": true, 00:19:35.303 "seek_hole": false, 00:19:35.303 "seek_data": false, 00:19:35.303 "copy": true, 00:19:35.303 "nvme_iov_md": false 00:19:35.303 }, 00:19:35.303 "memory_domains": [ 00:19:35.303 { 00:19:35.303 "dma_device_id": "system", 00:19:35.303 "dma_device_type": 1 00:19:35.303 }, 00:19:35.303 { 00:19:35.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.303 "dma_device_type": 2 00:19:35.303 } 00:19:35.303 ], 00:19:35.303 "driver_specific": {} 00:19:35.303 } 00:19:35.303 ] 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.303 22:36:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.303 "name": "Existed_Raid", 00:19:35.303 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:35.303 "strip_size_kb": 64, 00:19:35.303 "state": "configuring", 00:19:35.303 "raid_level": "raid5f", 00:19:35.303 "superblock": true, 00:19:35.303 "num_base_bdevs": 4, 00:19:35.303 "num_base_bdevs_discovered": 3, 00:19:35.303 "num_base_bdevs_operational": 4, 00:19:35.303 "base_bdevs_list": [ 00:19:35.303 { 00:19:35.303 "name": "BaseBdev1", 00:19:35.303 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 
00:19:35.303 "is_configured": true, 00:19:35.303 "data_offset": 2048, 00:19:35.303 "data_size": 63488 00:19:35.303 }, 00:19:35.303 { 00:19:35.303 "name": null, 00:19:35.303 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:35.303 "is_configured": false, 00:19:35.303 "data_offset": 0, 00:19:35.303 "data_size": 63488 00:19:35.303 }, 00:19:35.303 { 00:19:35.303 "name": "BaseBdev3", 00:19:35.303 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:35.303 "is_configured": true, 00:19:35.303 "data_offset": 2048, 00:19:35.303 "data_size": 63488 00:19:35.303 }, 00:19:35.303 { 00:19:35.303 "name": "BaseBdev4", 00:19:35.303 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:35.303 "is_configured": true, 00:19:35.303 "data_offset": 2048, 00:19:35.303 "data_size": 63488 00:19:35.303 } 00:19:35.303 ] 00:19:35.303 }' 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.303 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.869 [2024-09-27 22:36:31.500166] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.869 "name": "Existed_Raid", 00:19:35.869 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:35.869 "strip_size_kb": 64, 00:19:35.869 "state": "configuring", 00:19:35.869 "raid_level": "raid5f", 00:19:35.869 "superblock": true, 00:19:35.869 "num_base_bdevs": 4, 00:19:35.869 "num_base_bdevs_discovered": 2, 00:19:35.869 "num_base_bdevs_operational": 4, 00:19:35.869 "base_bdevs_list": [ 00:19:35.869 { 00:19:35.869 "name": "BaseBdev1", 00:19:35.869 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:35.869 "is_configured": true, 00:19:35.869 "data_offset": 2048, 00:19:35.869 "data_size": 63488 00:19:35.869 }, 00:19:35.869 { 00:19:35.869 "name": null, 00:19:35.869 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:35.869 "is_configured": false, 00:19:35.869 "data_offset": 0, 00:19:35.869 "data_size": 63488 00:19:35.869 }, 00:19:35.869 { 00:19:35.869 "name": null, 00:19:35.869 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:35.869 "is_configured": false, 00:19:35.869 "data_offset": 0, 00:19:35.869 "data_size": 63488 00:19:35.869 }, 00:19:35.869 { 00:19:35.869 "name": "BaseBdev4", 00:19:35.869 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:35.869 "is_configured": true, 00:19:35.869 "data_offset": 2048, 00:19:35.869 "data_size": 63488 00:19:35.869 } 00:19:35.869 ] 00:19:35.869 }' 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.869 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.127 [2024-09-27 22:36:31.980155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.127 22:36:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.127 22:36:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.384 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.384 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.384 "name": "Existed_Raid", 00:19:36.384 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:36.384 "strip_size_kb": 64, 00:19:36.384 "state": "configuring", 00:19:36.384 "raid_level": "raid5f", 00:19:36.384 "superblock": true, 00:19:36.384 "num_base_bdevs": 4, 00:19:36.384 "num_base_bdevs_discovered": 3, 00:19:36.384 "num_base_bdevs_operational": 4, 00:19:36.384 "base_bdevs_list": [ 00:19:36.384 { 00:19:36.384 "name": "BaseBdev1", 00:19:36.384 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:36.384 "is_configured": true, 00:19:36.384 "data_offset": 2048, 00:19:36.384 "data_size": 63488 00:19:36.384 }, 00:19:36.384 { 00:19:36.384 "name": null, 00:19:36.384 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:36.384 "is_configured": false, 00:19:36.384 "data_offset": 0, 00:19:36.384 "data_size": 63488 00:19:36.384 }, 00:19:36.384 { 00:19:36.384 "name": "BaseBdev3", 00:19:36.384 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:36.384 
"is_configured": true, 00:19:36.384 "data_offset": 2048, 00:19:36.384 "data_size": 63488 00:19:36.384 }, 00:19:36.384 { 00:19:36.384 "name": "BaseBdev4", 00:19:36.384 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:36.384 "is_configured": true, 00:19:36.384 "data_offset": 2048, 00:19:36.384 "data_size": 63488 00:19:36.384 } 00:19:36.384 ] 00:19:36.384 }' 00:19:36.384 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.384 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.641 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.641 [2024-09-27 22:36:32.456150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.898 "name": "Existed_Raid", 00:19:36.898 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:36.898 "strip_size_kb": 64, 00:19:36.898 "state": "configuring", 00:19:36.898 "raid_level": "raid5f", 00:19:36.898 
"superblock": true, 00:19:36.898 "num_base_bdevs": 4, 00:19:36.898 "num_base_bdevs_discovered": 2, 00:19:36.898 "num_base_bdevs_operational": 4, 00:19:36.898 "base_bdevs_list": [ 00:19:36.898 { 00:19:36.898 "name": null, 00:19:36.898 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:36.898 "is_configured": false, 00:19:36.898 "data_offset": 0, 00:19:36.898 "data_size": 63488 00:19:36.898 }, 00:19:36.898 { 00:19:36.898 "name": null, 00:19:36.898 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:36.898 "is_configured": false, 00:19:36.898 "data_offset": 0, 00:19:36.898 "data_size": 63488 00:19:36.898 }, 00:19:36.898 { 00:19:36.898 "name": "BaseBdev3", 00:19:36.898 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:36.898 "is_configured": true, 00:19:36.898 "data_offset": 2048, 00:19:36.898 "data_size": 63488 00:19:36.898 }, 00:19:36.898 { 00:19:36.898 "name": "BaseBdev4", 00:19:36.898 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:36.898 "is_configured": true, 00:19:36.898 "data_offset": 2048, 00:19:36.898 "data_size": 63488 00:19:36.898 } 00:19:36.898 ] 00:19:36.898 }' 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.898 22:36:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.156 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.156 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.156 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.156 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:37.156 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 [2024-09-27 22:36:33.052152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.414 22:36:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.414 "name": "Existed_Raid", 00:19:37.414 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:37.414 "strip_size_kb": 64, 00:19:37.414 "state": "configuring", 00:19:37.414 "raid_level": "raid5f", 00:19:37.414 "superblock": true, 00:19:37.414 "num_base_bdevs": 4, 00:19:37.414 "num_base_bdevs_discovered": 3, 00:19:37.414 "num_base_bdevs_operational": 4, 00:19:37.414 "base_bdevs_list": [ 00:19:37.414 { 00:19:37.414 "name": null, 00:19:37.414 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:37.414 "is_configured": false, 00:19:37.414 "data_offset": 0, 00:19:37.414 "data_size": 63488 00:19:37.414 }, 00:19:37.414 { 00:19:37.414 "name": "BaseBdev2", 00:19:37.414 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:37.414 "is_configured": true, 00:19:37.414 "data_offset": 2048, 00:19:37.414 "data_size": 63488 00:19:37.414 }, 00:19:37.414 { 00:19:37.414 "name": "BaseBdev3", 00:19:37.414 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:37.414 "is_configured": true, 00:19:37.414 "data_offset": 2048, 00:19:37.414 "data_size": 63488 00:19:37.414 }, 00:19:37.414 { 00:19:37.414 "name": "BaseBdev4", 00:19:37.414 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:37.414 "is_configured": true, 00:19:37.414 "data_offset": 2048, 00:19:37.414 "data_size": 63488 00:19:37.414 } 00:19:37.414 ] 00:19:37.414 }' 00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:37.414 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 32385b5e-f854-48a7-8c54-a70c1596b970 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.672 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.929 [2024-09-27 22:36:33.562205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:37.929 [2024-09-27 22:36:33.562473] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:37.929 [2024-09-27 22:36:33.562488] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:37.929 [2024-09-27 22:36:33.562754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:37.929 NewBaseBdev 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.929 [2024-09-27 22:36:33.570718] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:37.929 [2024-09-27 22:36:33.570879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:37.929 [2024-09-27 22:36:33.571273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.929 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.929 [ 00:19:37.929 { 00:19:37.929 "name": "NewBaseBdev", 00:19:37.929 "aliases": [ 00:19:37.929 "32385b5e-f854-48a7-8c54-a70c1596b970" 00:19:37.929 ], 00:19:37.929 "product_name": "Malloc disk", 00:19:37.929 "block_size": 512, 00:19:37.929 "num_blocks": 65536, 00:19:37.929 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:37.929 "assigned_rate_limits": { 00:19:37.929 "rw_ios_per_sec": 0, 00:19:37.929 "rw_mbytes_per_sec": 0, 00:19:37.929 "r_mbytes_per_sec": 0, 00:19:37.929 "w_mbytes_per_sec": 0 00:19:37.929 }, 00:19:37.929 "claimed": true, 00:19:37.929 "claim_type": "exclusive_write", 00:19:37.929 "zoned": false, 00:19:37.929 "supported_io_types": { 00:19:37.929 "read": true, 00:19:37.930 "write": true, 00:19:37.930 "unmap": true, 00:19:37.930 "flush": true, 00:19:37.930 "reset": true, 00:19:37.930 "nvme_admin": false, 00:19:37.930 "nvme_io": false, 00:19:37.930 "nvme_io_md": false, 00:19:37.930 "write_zeroes": true, 00:19:37.930 "zcopy": true, 00:19:37.930 "get_zone_info": false, 00:19:37.930 "zone_management": false, 00:19:37.930 "zone_append": false, 00:19:37.930 "compare": false, 00:19:37.930 "compare_and_write": false, 00:19:37.930 "abort": true, 00:19:37.930 "seek_hole": false, 00:19:37.930 "seek_data": false, 00:19:37.930 "copy": true, 00:19:37.930 "nvme_iov_md": false 00:19:37.930 }, 00:19:37.930 "memory_domains": [ 00:19:37.930 { 00:19:37.930 "dma_device_id": "system", 00:19:37.930 "dma_device_type": 1 00:19:37.930 }, 00:19:37.930 { 00:19:37.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.930 "dma_device_type": 2 00:19:37.930 } 
00:19:37.930 ], 00:19:37.930 "driver_specific": {} 00:19:37.930 } 00:19:37.930 ] 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.930 
22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.930 "name": "Existed_Raid", 00:19:37.930 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:37.930 "strip_size_kb": 64, 00:19:37.930 "state": "online", 00:19:37.930 "raid_level": "raid5f", 00:19:37.930 "superblock": true, 00:19:37.930 "num_base_bdevs": 4, 00:19:37.930 "num_base_bdevs_discovered": 4, 00:19:37.930 "num_base_bdevs_operational": 4, 00:19:37.930 "base_bdevs_list": [ 00:19:37.930 { 00:19:37.930 "name": "NewBaseBdev", 00:19:37.930 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:37.930 "is_configured": true, 00:19:37.930 "data_offset": 2048, 00:19:37.930 "data_size": 63488 00:19:37.930 }, 00:19:37.930 { 00:19:37.930 "name": "BaseBdev2", 00:19:37.930 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:37.930 "is_configured": true, 00:19:37.930 "data_offset": 2048, 00:19:37.930 "data_size": 63488 00:19:37.930 }, 00:19:37.930 { 00:19:37.930 "name": "BaseBdev3", 00:19:37.930 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:37.930 "is_configured": true, 00:19:37.930 "data_offset": 2048, 00:19:37.930 "data_size": 63488 00:19:37.930 }, 00:19:37.930 { 00:19:37.930 "name": "BaseBdev4", 00:19:37.930 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:37.930 "is_configured": true, 00:19:37.930 "data_offset": 2048, 00:19:37.930 "data_size": 63488 00:19:37.930 } 00:19:37.930 ] 00:19:37.930 }' 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.930 22:36:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.187 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.451 [2024-09-27 22:36:34.071074] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.451 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.451 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.451 "name": "Existed_Raid", 00:19:38.451 "aliases": [ 00:19:38.452 "8db2a176-2a83-4796-a661-4708ac8e8a91" 00:19:38.452 ], 00:19:38.452 "product_name": "Raid Volume", 00:19:38.452 "block_size": 512, 00:19:38.452 "num_blocks": 190464, 00:19:38.452 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:38.452 "assigned_rate_limits": { 00:19:38.452 "rw_ios_per_sec": 0, 00:19:38.452 "rw_mbytes_per_sec": 0, 00:19:38.452 "r_mbytes_per_sec": 0, 00:19:38.452 "w_mbytes_per_sec": 0 00:19:38.452 }, 00:19:38.452 "claimed": false, 00:19:38.452 "zoned": false, 00:19:38.452 "supported_io_types": { 00:19:38.452 "read": true, 00:19:38.452 "write": true, 00:19:38.452 "unmap": false, 00:19:38.452 "flush": false, 
00:19:38.452 "reset": true, 00:19:38.452 "nvme_admin": false, 00:19:38.452 "nvme_io": false, 00:19:38.452 "nvme_io_md": false, 00:19:38.452 "write_zeroes": true, 00:19:38.452 "zcopy": false, 00:19:38.452 "get_zone_info": false, 00:19:38.452 "zone_management": false, 00:19:38.452 "zone_append": false, 00:19:38.452 "compare": false, 00:19:38.452 "compare_and_write": false, 00:19:38.452 "abort": false, 00:19:38.452 "seek_hole": false, 00:19:38.452 "seek_data": false, 00:19:38.452 "copy": false, 00:19:38.452 "nvme_iov_md": false 00:19:38.452 }, 00:19:38.452 "driver_specific": { 00:19:38.452 "raid": { 00:19:38.452 "uuid": "8db2a176-2a83-4796-a661-4708ac8e8a91", 00:19:38.452 "strip_size_kb": 64, 00:19:38.452 "state": "online", 00:19:38.452 "raid_level": "raid5f", 00:19:38.452 "superblock": true, 00:19:38.452 "num_base_bdevs": 4, 00:19:38.452 "num_base_bdevs_discovered": 4, 00:19:38.452 "num_base_bdevs_operational": 4, 00:19:38.452 "base_bdevs_list": [ 00:19:38.452 { 00:19:38.452 "name": "NewBaseBdev", 00:19:38.452 "uuid": "32385b5e-f854-48a7-8c54-a70c1596b970", 00:19:38.452 "is_configured": true, 00:19:38.452 "data_offset": 2048, 00:19:38.452 "data_size": 63488 00:19:38.452 }, 00:19:38.452 { 00:19:38.452 "name": "BaseBdev2", 00:19:38.452 "uuid": "f134b5ae-b614-44fd-9025-1cdec8892caa", 00:19:38.452 "is_configured": true, 00:19:38.452 "data_offset": 2048, 00:19:38.452 "data_size": 63488 00:19:38.452 }, 00:19:38.452 { 00:19:38.452 "name": "BaseBdev3", 00:19:38.452 "uuid": "b7c1b15a-8118-47ac-835c-0b856a9efea1", 00:19:38.452 "is_configured": true, 00:19:38.452 "data_offset": 2048, 00:19:38.452 "data_size": 63488 00:19:38.452 }, 00:19:38.452 { 00:19:38.452 "name": "BaseBdev4", 00:19:38.452 "uuid": "f8ede215-3134-42ba-8ef2-98ccaea2d47f", 00:19:38.452 "is_configured": true, 00:19:38.452 "data_offset": 2048, 00:19:38.452 "data_size": 63488 00:19:38.452 } 00:19:38.452 ] 00:19:38.452 } 00:19:38.452 } 00:19:38.452 }' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:38.452 BaseBdev2 00:19:38.452 BaseBdev3 00:19:38.452 BaseBdev4' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.452 
22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.452 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.709 22:36:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.709 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.710 [2024-09-27 22:36:34.378347] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.710 [2024-09-27 22:36:34.378381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.710 [2024-09-27 22:36:34.378466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.710 [2024-09-27 22:36:34.378784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.710 [2024-09-27 22:36:34.378797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84558 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84558 ']' 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 84558 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84558 00:19:38.710 killing process with pid 84558 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84558' 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84558 00:19:38.710 [2024-09-27 22:36:34.430481] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.710 22:36:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84558 00:19:38.967 [2024-09-27 22:36:34.835417] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:41.500 22:36:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:41.500 00:19:41.500 real 0m12.507s 00:19:41.500 user 0m19.046s 00:19:41.500 sys 0m2.499s 00:19:41.500 ************************************ 00:19:41.500 END TEST raid5f_state_function_test_sb 00:19:41.500 ************************************ 00:19:41.500 22:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:41.500 22:36:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.500 22:36:36 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:41.500 22:36:36 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:41.500 22:36:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:41.500 22:36:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:41.500 ************************************ 00:19:41.500 START TEST raid5f_superblock_test 00:19:41.500 ************************************ 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85235 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85235 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85235 ']' 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.500 22:36:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.500 [2024-09-27 22:36:36.999501] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:19:41.500 [2024-09-27 22:36:36.999846] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85235 ] 00:19:41.500 [2024-09-27 22:36:37.175304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.757 [2024-09-27 22:36:37.409278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.015 [2024-09-27 22:36:37.649516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.015 [2024-09-27 22:36:37.649762] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.272 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.530 malloc1 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.530 [2024-09-27 22:36:38.183770] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:42.530 [2024-09-27 22:36:38.183857] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.530 [2024-09-27 22:36:38.183888] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:42.530 [2024-09-27 22:36:38.183910] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.530 [2024-09-27 22:36:38.186957] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.530 [2024-09-27 22:36:38.187043] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:42.530 pt1 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:42.530 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.531 malloc2 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.531 [2024-09-27 22:36:38.246137] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.531 [2024-09-27 22:36:38.246363] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.531 [2024-09-27 22:36:38.246440] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:42.531 [2024-09-27 22:36:38.246538] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.531 [2024-09-27 22:36:38.249476] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.531 [2024-09-27 22:36:38.249638] vbdev_passthru.c: 
791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.531 pt2 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.531 malloc3 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.531 [2024-09-27 22:36:38.312292] 
vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.531 [2024-09-27 22:36:38.312458] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.531 [2024-09-27 22:36:38.312491] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:42.531 [2024-09-27 22:36:38.312503] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.531 [2024-09-27 22:36:38.314992] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.531 [2024-09-27 22:36:38.315061] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.531 pt3 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.531 malloc4 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.531 [2024-09-27 22:36:38.376395] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:42.531 [2024-09-27 22:36:38.376573] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.531 [2024-09-27 22:36:38.376633] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:42.531 [2024-09-27 22:36:38.376706] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.531 [2024-09-27 22:36:38.379259] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.531 [2024-09-27 22:36:38.379388] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:42.531 pt4 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.531 22:36:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.531 [2024-09-27 22:36:38.392429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:42.531 [2024-09-27 22:36:38.394555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.531 [2024-09-27 22:36:38.394621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.531 [2024-09-27 22:36:38.394687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:42.531 [2024-09-27 22:36:38.394934] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:42.531 [2024-09-27 22:36:38.394968] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:42.531 [2024-09-27 22:36:38.395262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:42.531 [2024-09-27 22:36:38.404299] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:42.531 [2024-09-27 22:36:38.404436] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:42.531 [2024-09-27 22:36:38.404673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.789 
22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.789 "name": "raid_bdev1", 00:19:42.789 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:42.789 "strip_size_kb": 64, 00:19:42.789 "state": "online", 00:19:42.789 "raid_level": "raid5f", 00:19:42.789 "superblock": true, 00:19:42.789 "num_base_bdevs": 4, 00:19:42.789 "num_base_bdevs_discovered": 4, 00:19:42.789 "num_base_bdevs_operational": 4, 00:19:42.789 "base_bdevs_list": [ 00:19:42.789 { 00:19:42.789 "name": "pt1", 00:19:42.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.789 "is_configured": true, 00:19:42.789 "data_offset": 2048, 00:19:42.789 "data_size": 63488 00:19:42.789 }, 00:19:42.789 { 00:19:42.789 "name": "pt2", 00:19:42.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.789 "is_configured": true, 00:19:42.789 "data_offset": 2048, 00:19:42.789 
"data_size": 63488 00:19:42.789 }, 00:19:42.789 { 00:19:42.789 "name": "pt3", 00:19:42.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:42.789 "is_configured": true, 00:19:42.789 "data_offset": 2048, 00:19:42.789 "data_size": 63488 00:19:42.789 }, 00:19:42.789 { 00:19:42.789 "name": "pt4", 00:19:42.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:42.789 "is_configured": true, 00:19:42.789 "data_offset": 2048, 00:19:42.789 "data_size": 63488 00:19:42.789 } 00:19:42.789 ] 00:19:42.789 }' 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.789 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.047 [2024-09-27 22:36:38.872359] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:43.047 "name": "raid_bdev1", 00:19:43.047 "aliases": [ 00:19:43.047 "03a35c7f-1d2c-4480-845e-2e45c7f1d55f" 00:19:43.047 ], 00:19:43.047 "product_name": "Raid Volume", 00:19:43.047 "block_size": 512, 00:19:43.047 "num_blocks": 190464, 00:19:43.047 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:43.047 "assigned_rate_limits": { 00:19:43.047 "rw_ios_per_sec": 0, 00:19:43.047 "rw_mbytes_per_sec": 0, 00:19:43.047 "r_mbytes_per_sec": 0, 00:19:43.047 "w_mbytes_per_sec": 0 00:19:43.047 }, 00:19:43.047 "claimed": false, 00:19:43.047 "zoned": false, 00:19:43.047 "supported_io_types": { 00:19:43.047 "read": true, 00:19:43.047 "write": true, 00:19:43.047 "unmap": false, 00:19:43.047 "flush": false, 00:19:43.047 "reset": true, 00:19:43.047 "nvme_admin": false, 00:19:43.047 "nvme_io": false, 00:19:43.047 "nvme_io_md": false, 00:19:43.047 "write_zeroes": true, 00:19:43.047 "zcopy": false, 00:19:43.047 "get_zone_info": false, 00:19:43.047 "zone_management": false, 00:19:43.047 "zone_append": false, 00:19:43.047 "compare": false, 00:19:43.047 "compare_and_write": false, 00:19:43.047 "abort": false, 00:19:43.047 "seek_hole": false, 00:19:43.047 "seek_data": false, 00:19:43.047 "copy": false, 00:19:43.047 "nvme_iov_md": false 00:19:43.047 }, 00:19:43.047 "driver_specific": { 00:19:43.047 "raid": { 00:19:43.047 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:43.047 "strip_size_kb": 64, 00:19:43.047 "state": "online", 00:19:43.047 "raid_level": "raid5f", 00:19:43.047 "superblock": true, 00:19:43.047 "num_base_bdevs": 4, 00:19:43.047 "num_base_bdevs_discovered": 4, 00:19:43.047 "num_base_bdevs_operational": 4, 00:19:43.047 "base_bdevs_list": [ 00:19:43.047 { 00:19:43.047 "name": "pt1", 00:19:43.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:43.047 "is_configured": true, 00:19:43.047 "data_offset": 2048, 
00:19:43.047 "data_size": 63488 00:19:43.047 }, 00:19:43.047 { 00:19:43.047 "name": "pt2", 00:19:43.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.047 "is_configured": true, 00:19:43.047 "data_offset": 2048, 00:19:43.047 "data_size": 63488 00:19:43.047 }, 00:19:43.047 { 00:19:43.047 "name": "pt3", 00:19:43.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.047 "is_configured": true, 00:19:43.047 "data_offset": 2048, 00:19:43.047 "data_size": 63488 00:19:43.047 }, 00:19:43.047 { 00:19:43.047 "name": "pt4", 00:19:43.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:43.047 "is_configured": true, 00:19:43.047 "data_offset": 2048, 00:19:43.047 "data_size": 63488 00:19:43.047 } 00:19:43.047 ] 00:19:43.047 } 00:19:43.047 } 00:19:43.047 }' 00:19:43.047 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:43.306 pt2 00:19:43.306 pt3 00:19:43.306 pt4' 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.306 22:36:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.306 22:36:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.306 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:43.565 [2024-09-27 22:36:39.196333] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=03a35c7f-1d2c-4480-845e-2e45c7f1d55f 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
03a35c7f-1d2c-4480-845e-2e45c7f1d55f ']' 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 [2024-09-27 22:36:39.236131] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.565 [2024-09-27 22:36:39.236162] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.565 [2024-09-27 22:36:39.236246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.565 [2024-09-27 22:36:39.236329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.565 [2024-09-27 22:36:39.236346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:43.565 
22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:43.565 [2024-09-27 22:36:39.416162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:19:43.565 [2024-09-27 22:36:39.418356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:19:43.565 [2024-09-27 22:36:39.418402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:19:43.565 [2024-09-27 22:36:39.418435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:19:43.565 [2024-09-27 22:36:39.418485] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:19:43.565 [2024-09-27 22:36:39.418544] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:19:43.565 [2024-09-27 22:36:39.418566] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:19:43.565 [2024-09-27 22:36:39.418588] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:19:43.565 [2024-09-27 22:36:39.418604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:43.565 [2024-09-27 22:36:39.418621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:19:43.565 request:
00:19:43.565 {
00:19:43.565 "name": "raid_bdev1",
00:19:43.565 "raid_level": "raid5f",
00:19:43.565 "base_bdevs": [
00:19:43.565 "malloc1",
00:19:43.565 "malloc2",
00:19:43.565 "malloc3",
00:19:43.565 "malloc4"
00:19:43.565 ],
00:19:43.565 "strip_size_kb": 64,
00:19:43.565 "superblock": false,
00:19:43.565 "method": "bdev_raid_create",
00:19:43.565 "req_id": 1
00:19:43.565 }
00:19:43.565 Got JSON-RPC error response
00:19:43.565 response:
00:19:43.565 {
00:19:43.565 "code": -17,
00:19:43.565 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:19:43.565 }
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:43.565 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:43.823 [2024-09-27 22:36:39.484163] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:43.823 [2024-09-27 22:36:39.484372] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*:
base bdev opened 00:19:43.823 [2024-09-27 22:36:39.484403] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:43.823 [2024-09-27 22:36:39.484419] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.823 [2024-09-27 22:36:39.487084] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.823 [2024-09-27 22:36:39.487131] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:43.823 [2024-09-27 22:36:39.487223] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:43.823 [2024-09-27 22:36:39.487290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:43.823 pt1 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.823 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.823 "name": "raid_bdev1", 00:19:43.823 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:43.823 "strip_size_kb": 64, 00:19:43.823 "state": "configuring", 00:19:43.823 "raid_level": "raid5f", 00:19:43.823 "superblock": true, 00:19:43.823 "num_base_bdevs": 4, 00:19:43.823 "num_base_bdevs_discovered": 1, 00:19:43.823 "num_base_bdevs_operational": 4, 00:19:43.823 "base_bdevs_list": [ 00:19:43.823 { 00:19:43.823 "name": "pt1", 00:19:43.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:43.823 "is_configured": true, 00:19:43.823 "data_offset": 2048, 00:19:43.823 "data_size": 63488 00:19:43.823 }, 00:19:43.823 { 00:19:43.823 "name": null, 00:19:43.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.823 "is_configured": false, 00:19:43.823 "data_offset": 2048, 00:19:43.823 "data_size": 63488 00:19:43.823 }, 00:19:43.823 { 00:19:43.823 "name": null, 00:19:43.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.823 "is_configured": false, 00:19:43.823 "data_offset": 2048, 00:19:43.823 "data_size": 63488 00:19:43.823 }, 00:19:43.823 { 00:19:43.823 "name": null, 00:19:43.823 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:43.823 "is_configured": false, 00:19:43.823 "data_offset": 2048, 00:19:43.824 "data_size": 63488 00:19:43.824 } 00:19:43.824 ] 00:19:43.824 }' 
00:19:43.824 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.824 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.082 [2024-09-27 22:36:39.940124] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:44.082 [2024-09-27 22:36:39.940325] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.082 [2024-09-27 22:36:39.940380] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:44.082 [2024-09-27 22:36:39.940462] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.082 [2024-09-27 22:36:39.940965] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.082 [2024-09-27 22:36:39.941116] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:44.082 [2024-09-27 22:36:39.941288] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:44.082 [2024-09-27 22:36:39.941326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.082 pt2 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.082 [2024-09-27 22:36:39.952191] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.082 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.340 22:36:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:44.340 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.340 "name": "raid_bdev1", 00:19:44.340 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:44.340 "strip_size_kb": 64, 00:19:44.340 "state": "configuring", 00:19:44.340 "raid_level": "raid5f", 00:19:44.340 "superblock": true, 00:19:44.340 "num_base_bdevs": 4, 00:19:44.340 "num_base_bdevs_discovered": 1, 00:19:44.340 "num_base_bdevs_operational": 4, 00:19:44.340 "base_bdevs_list": [ 00:19:44.340 { 00:19:44.340 "name": "pt1", 00:19:44.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:44.340 "is_configured": true, 00:19:44.340 "data_offset": 2048, 00:19:44.340 "data_size": 63488 00:19:44.340 }, 00:19:44.340 { 00:19:44.340 "name": null, 00:19:44.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:44.340 "is_configured": false, 00:19:44.340 "data_offset": 0, 00:19:44.340 "data_size": 63488 00:19:44.340 }, 00:19:44.340 { 00:19:44.340 "name": null, 00:19:44.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:44.340 "is_configured": false, 00:19:44.340 "data_offset": 2048, 00:19:44.340 "data_size": 63488 00:19:44.340 }, 00:19:44.340 { 00:19:44.340 "name": null, 00:19:44.340 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:44.340 "is_configured": false, 00:19:44.340 "data_offset": 2048, 00:19:44.340 "data_size": 63488 00:19:44.340 } 00:19:44.340 ] 00:19:44.340 }' 00:19:44.340 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.340 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 [2024-09-27 22:36:40.376127] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:44.599 [2024-09-27 22:36:40.376342] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.599 [2024-09-27 22:36:40.376377] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:44.599 [2024-09-27 22:36:40.376390] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.599 [2024-09-27 22:36:40.376885] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.599 [2024-09-27 22:36:40.376903] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:44.599 [2024-09-27 22:36:40.376989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:44.599 [2024-09-27 22:36:40.377025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.599 pt2 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 [2024-09-27 22:36:40.388131] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:44.599 [2024-09-27 22:36:40.388189] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.599 [2024-09-27 22:36:40.388219] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:44.599 [2024-09-27 22:36:40.388231] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.599 [2024-09-27 22:36:40.388647] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.599 [2024-09-27 22:36:40.388665] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:44.599 [2024-09-27 22:36:40.388740] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:44.599 [2024-09-27 22:36:40.388767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:44.599 pt3 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 [2024-09-27 22:36:40.400086] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:44.599 [2024-09-27 22:36:40.400142] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.599 [2024-09-27 22:36:40.400166] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:44.599 [2024-09-27 22:36:40.400177] vbdev_passthru.c: 
777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.599 [2024-09-27 22:36:40.400591] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.599 [2024-09-27 22:36:40.400608] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:44.599 [2024-09-27 22:36:40.400679] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:44.599 [2024-09-27 22:36:40.400698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:44.599 [2024-09-27 22:36:40.400832] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:44.599 [2024-09-27 22:36:40.400842] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:44.599 [2024-09-27 22:36:40.401105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:44.599 [2024-09-27 22:36:40.408809] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:44.599 [2024-09-27 22:36:40.408836] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:44.599 [2024-09-27 22:36:40.409036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.599 pt4 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.599 "name": "raid_bdev1", 00:19:44.599 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:44.599 "strip_size_kb": 64, 00:19:44.599 "state": "online", 00:19:44.599 "raid_level": "raid5f", 00:19:44.599 "superblock": true, 00:19:44.599 "num_base_bdevs": 4, 00:19:44.599 "num_base_bdevs_discovered": 4, 00:19:44.599 "num_base_bdevs_operational": 4, 00:19:44.599 "base_bdevs_list": [ 00:19:44.599 { 00:19:44.599 "name": "pt1", 00:19:44.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:44.599 "is_configured": true, 00:19:44.599 
"data_offset": 2048, 00:19:44.599 "data_size": 63488 00:19:44.599 }, 00:19:44.599 { 00:19:44.599 "name": "pt2", 00:19:44.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:44.599 "is_configured": true, 00:19:44.599 "data_offset": 2048, 00:19:44.599 "data_size": 63488 00:19:44.599 }, 00:19:44.599 { 00:19:44.599 "name": "pt3", 00:19:44.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:44.599 "is_configured": true, 00:19:44.599 "data_offset": 2048, 00:19:44.599 "data_size": 63488 00:19:44.599 }, 00:19:44.599 { 00:19:44.599 "name": "pt4", 00:19:44.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:44.599 "is_configured": true, 00:19:44.599 "data_offset": 2048, 00:19:44.599 "data_size": 63488 00:19:44.599 } 00:19:44.599 ] 00:19:44.599 }' 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.599 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.166 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:45.166 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:45.166 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:45.166 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:45.166 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.167 22:36:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.167 [2024-09-27 22:36:40.832556] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.167 "name": "raid_bdev1", 00:19:45.167 "aliases": [ 00:19:45.167 "03a35c7f-1d2c-4480-845e-2e45c7f1d55f" 00:19:45.167 ], 00:19:45.167 "product_name": "Raid Volume", 00:19:45.167 "block_size": 512, 00:19:45.167 "num_blocks": 190464, 00:19:45.167 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:45.167 "assigned_rate_limits": { 00:19:45.167 "rw_ios_per_sec": 0, 00:19:45.167 "rw_mbytes_per_sec": 0, 00:19:45.167 "r_mbytes_per_sec": 0, 00:19:45.167 "w_mbytes_per_sec": 0 00:19:45.167 }, 00:19:45.167 "claimed": false, 00:19:45.167 "zoned": false, 00:19:45.167 "supported_io_types": { 00:19:45.167 "read": true, 00:19:45.167 "write": true, 00:19:45.167 "unmap": false, 00:19:45.167 "flush": false, 00:19:45.167 "reset": true, 00:19:45.167 "nvme_admin": false, 00:19:45.167 "nvme_io": false, 00:19:45.167 "nvme_io_md": false, 00:19:45.167 "write_zeroes": true, 00:19:45.167 "zcopy": false, 00:19:45.167 "get_zone_info": false, 00:19:45.167 "zone_management": false, 00:19:45.167 "zone_append": false, 00:19:45.167 "compare": false, 00:19:45.167 "compare_and_write": false, 00:19:45.167 "abort": false, 00:19:45.167 "seek_hole": false, 00:19:45.167 "seek_data": false, 00:19:45.167 "copy": false, 00:19:45.167 "nvme_iov_md": false 00:19:45.167 }, 00:19:45.167 "driver_specific": { 00:19:45.167 "raid": { 00:19:45.167 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:45.167 "strip_size_kb": 64, 00:19:45.167 "state": "online", 00:19:45.167 "raid_level": "raid5f", 00:19:45.167 "superblock": true, 00:19:45.167 "num_base_bdevs": 4, 00:19:45.167 "num_base_bdevs_discovered": 4, 
00:19:45.167 "num_base_bdevs_operational": 4, 00:19:45.167 "base_bdevs_list": [ 00:19:45.167 { 00:19:45.167 "name": "pt1", 00:19:45.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:45.167 "is_configured": true, 00:19:45.167 "data_offset": 2048, 00:19:45.167 "data_size": 63488 00:19:45.167 }, 00:19:45.167 { 00:19:45.167 "name": "pt2", 00:19:45.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:45.167 "is_configured": true, 00:19:45.167 "data_offset": 2048, 00:19:45.167 "data_size": 63488 00:19:45.167 }, 00:19:45.167 { 00:19:45.167 "name": "pt3", 00:19:45.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:45.167 "is_configured": true, 00:19:45.167 "data_offset": 2048, 00:19:45.167 "data_size": 63488 00:19:45.167 }, 00:19:45.167 { 00:19:45.167 "name": "pt4", 00:19:45.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:45.167 "is_configured": true, 00:19:45.167 "data_offset": 2048, 00:19:45.167 "data_size": 63488 00:19:45.167 } 00:19:45.167 ] 00:19:45.167 } 00:19:45.167 } 00:19:45.167 }' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:45.167 pt2 00:19:45.167 pt3 00:19:45.167 pt4' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.167 22:36:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.167 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:45.167 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.167 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.167 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.167 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.426 22:36:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:45.426 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.426 [2024-09-27 22:36:41.176300] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.427 
22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 03a35c7f-1d2c-4480-845e-2e45c7f1d55f '!=' 03a35c7f-1d2c-4480-845e-2e45c7f1d55f ']' 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.427 [2024-09-27 22:36:41.216174] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.427 "name": "raid_bdev1", 00:19:45.427 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:45.427 "strip_size_kb": 64, 00:19:45.427 "state": "online", 00:19:45.427 "raid_level": "raid5f", 00:19:45.427 "superblock": true, 00:19:45.427 "num_base_bdevs": 4, 00:19:45.427 "num_base_bdevs_discovered": 3, 00:19:45.427 "num_base_bdevs_operational": 3, 00:19:45.427 "base_bdevs_list": [ 00:19:45.427 { 00:19:45.427 "name": null, 00:19:45.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.427 "is_configured": false, 00:19:45.427 "data_offset": 0, 00:19:45.427 "data_size": 63488 00:19:45.427 }, 00:19:45.427 { 00:19:45.427 "name": "pt2", 00:19:45.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:45.427 "is_configured": true, 00:19:45.427 "data_offset": 2048, 00:19:45.427 "data_size": 63488 00:19:45.427 }, 00:19:45.427 { 00:19:45.427 "name": "pt3", 00:19:45.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:45.427 "is_configured": true, 00:19:45.427 "data_offset": 2048, 00:19:45.427 "data_size": 63488 00:19:45.427 }, 00:19:45.427 { 00:19:45.427 "name": "pt4", 00:19:45.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:45.427 "is_configured": true, 00:19:45.427 
"data_offset": 2048, 00:19:45.427 "data_size": 63488 00:19:45.427 } 00:19:45.427 ] 00:19:45.427 }' 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.427 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 [2024-09-27 22:36:41.648146] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:45.996 [2024-09-27 22:36:41.648183] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.996 [2024-09-27 22:36:41.648261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.996 [2024-09-27 22:36:41.648342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.996 [2024-09-27 22:36:41.648354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 [2024-09-27 22:36:41.748156] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:45.996 [2024-09-27 22:36:41.748346] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.996 [2024-09-27 22:36:41.748381] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:45.996 [2024-09-27 22:36:41.748394] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.996 [2024-09-27 22:36:41.750919] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.996 [2024-09-27 22:36:41.750962] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:45.996 [2024-09-27 22:36:41.751076] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:45.996 [2024-09-27 22:36:41.751122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:45.996 pt2 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.996 "name": "raid_bdev1", 00:19:45.996 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:45.996 "strip_size_kb": 64, 00:19:45.996 "state": "configuring", 00:19:45.996 "raid_level": "raid5f", 00:19:45.996 "superblock": true, 00:19:45.996 
"num_base_bdevs": 4, 00:19:45.996 "num_base_bdevs_discovered": 1, 00:19:45.996 "num_base_bdevs_operational": 3, 00:19:45.996 "base_bdevs_list": [ 00:19:45.996 { 00:19:45.996 "name": null, 00:19:45.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.996 "is_configured": false, 00:19:45.996 "data_offset": 2048, 00:19:45.996 "data_size": 63488 00:19:45.996 }, 00:19:45.996 { 00:19:45.996 "name": "pt2", 00:19:45.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:45.996 "is_configured": true, 00:19:45.996 "data_offset": 2048, 00:19:45.996 "data_size": 63488 00:19:45.996 }, 00:19:45.996 { 00:19:45.996 "name": null, 00:19:45.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:45.996 "is_configured": false, 00:19:45.996 "data_offset": 2048, 00:19:45.996 "data_size": 63488 00:19:45.996 }, 00:19:45.996 { 00:19:45.996 "name": null, 00:19:45.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:45.996 "is_configured": false, 00:19:45.996 "data_offset": 2048, 00:19:45.996 "data_size": 63488 00:19:45.996 } 00:19:45.996 ] 00:19:45.996 }' 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.996 22:36:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 [2024-09-27 22:36:42.200142] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:46.565 [2024-09-27 
22:36:42.200353] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.565 [2024-09-27 22:36:42.200413] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:46.565 [2024-09-27 22:36:42.200632] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.565 [2024-09-27 22:36:42.201126] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.565 [2024-09-27 22:36:42.201147] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:46.565 [2024-09-27 22:36:42.201237] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:46.565 [2024-09-27 22:36:42.201265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:46.565 pt3 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.565 "name": "raid_bdev1", 00:19:46.565 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:46.565 "strip_size_kb": 64, 00:19:46.565 "state": "configuring", 00:19:46.565 "raid_level": "raid5f", 00:19:46.565 "superblock": true, 00:19:46.565 "num_base_bdevs": 4, 00:19:46.565 "num_base_bdevs_discovered": 2, 00:19:46.565 "num_base_bdevs_operational": 3, 00:19:46.565 "base_bdevs_list": [ 00:19:46.565 { 00:19:46.565 "name": null, 00:19:46.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.565 "is_configured": false, 00:19:46.565 "data_offset": 2048, 00:19:46.565 "data_size": 63488 00:19:46.565 }, 00:19:46.565 { 00:19:46.565 "name": "pt2", 00:19:46.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:46.565 "is_configured": true, 00:19:46.565 "data_offset": 2048, 00:19:46.565 "data_size": 63488 00:19:46.565 }, 00:19:46.565 { 00:19:46.565 "name": "pt3", 00:19:46.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:46.565 "is_configured": true, 00:19:46.565 "data_offset": 2048, 00:19:46.565 "data_size": 63488 00:19:46.565 }, 00:19:46.565 { 00:19:46.565 "name": null, 00:19:46.565 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:46.565 "is_configured": false, 00:19:46.565 "data_offset": 2048, 
00:19:46.565 "data_size": 63488 00:19:46.565 } 00:19:46.565 ] 00:19:46.565 }' 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.565 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.824 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:46.824 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:46.824 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:46.824 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:46.824 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.824 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.824 [2024-09-27 22:36:42.628131] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:46.824 [2024-09-27 22:36:42.628197] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.824 [2024-09-27 22:36:42.628223] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:46.824 [2024-09-27 22:36:42.628235] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.824 [2024-09-27 22:36:42.628695] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.824 [2024-09-27 22:36:42.628714] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:46.824 [2024-09-27 22:36:42.628797] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:46.824 [2024-09-27 22:36:42.628818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:46.824 [2024-09-27 22:36:42.628947] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:46.824 [2024-09-27 22:36:42.628957] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:46.824 [2024-09-27 22:36:42.629229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:46.824 [2024-09-27 22:36:42.637815] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:46.825 [2024-09-27 22:36:42.637842] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:46.825 [2024-09-27 22:36:42.638141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.825 pt4 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.825 
22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.825 "name": "raid_bdev1", 00:19:46.825 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:46.825 "strip_size_kb": 64, 00:19:46.825 "state": "online", 00:19:46.825 "raid_level": "raid5f", 00:19:46.825 "superblock": true, 00:19:46.825 "num_base_bdevs": 4, 00:19:46.825 "num_base_bdevs_discovered": 3, 00:19:46.825 "num_base_bdevs_operational": 3, 00:19:46.825 "base_bdevs_list": [ 00:19:46.825 { 00:19:46.825 "name": null, 00:19:46.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.825 "is_configured": false, 00:19:46.825 "data_offset": 2048, 00:19:46.825 "data_size": 63488 00:19:46.825 }, 00:19:46.825 { 00:19:46.825 "name": "pt2", 00:19:46.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:46.825 "is_configured": true, 00:19:46.825 "data_offset": 2048, 00:19:46.825 "data_size": 63488 00:19:46.825 }, 00:19:46.825 { 00:19:46.825 "name": "pt3", 00:19:46.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:46.825 "is_configured": true, 00:19:46.825 "data_offset": 2048, 00:19:46.825 "data_size": 63488 00:19:46.825 }, 00:19:46.825 { 00:19:46.825 "name": "pt4", 00:19:46.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:46.825 "is_configured": true, 00:19:46.825 "data_offset": 2048, 00:19:46.825 "data_size": 63488 00:19:46.825 } 00:19:46.825 ] 00:19:46.825 }' 00:19:46.825 22:36:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.825 22:36:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 [2024-09-27 22:36:43.018032] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.394 [2024-09-27 22:36:43.018224] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.394 [2024-09-27 22:36:43.018333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.394 [2024-09-27 22:36:43.018416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.394 [2024-09-27 22:36:43.018434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 [2024-09-27 22:36:43.081929] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:47.394 [2024-09-27 22:36:43.082023] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.394 [2024-09-27 22:36:43.082047] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:47.394 [2024-09-27 22:36:43.082063] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.394 [2024-09-27 22:36:43.084845] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.394 [2024-09-27 22:36:43.085044] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:47.394 [2024-09-27 22:36:43.085160] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:47.394 [2024-09-27 22:36:43.085233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:47.394 
[2024-09-27 22:36:43.085363] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:47.394 [2024-09-27 22:36:43.085383] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.394 [2024-09-27 22:36:43.085402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:47.394 [2024-09-27 22:36:43.085479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:47.394 [2024-09-27 22:36:43.085592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:47.394 pt1 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.394 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.395 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.395 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.395 "name": "raid_bdev1", 00:19:47.395 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:47.395 "strip_size_kb": 64, 00:19:47.395 "state": "configuring", 00:19:47.395 "raid_level": "raid5f", 00:19:47.395 "superblock": true, 00:19:47.395 "num_base_bdevs": 4, 00:19:47.395 "num_base_bdevs_discovered": 2, 00:19:47.395 "num_base_bdevs_operational": 3, 00:19:47.395 "base_bdevs_list": [ 00:19:47.395 { 00:19:47.395 "name": null, 00:19:47.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.395 "is_configured": false, 00:19:47.395 "data_offset": 2048, 00:19:47.395 "data_size": 63488 00:19:47.395 }, 00:19:47.395 { 00:19:47.395 "name": "pt2", 00:19:47.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.395 "is_configured": true, 00:19:47.395 "data_offset": 2048, 00:19:47.395 "data_size": 63488 00:19:47.395 }, 00:19:47.395 { 00:19:47.395 "name": "pt3", 00:19:47.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:47.395 "is_configured": true, 00:19:47.395 "data_offset": 2048, 00:19:47.395 "data_size": 63488 00:19:47.395 }, 00:19:47.395 { 00:19:47.395 "name": null, 00:19:47.395 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:47.395 "is_configured": false, 00:19:47.395 "data_offset": 2048, 00:19:47.395 "data_size": 63488 00:19:47.395 } 00:19:47.395 ] 
00:19:47.395 }' 00:19:47.395 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.395 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.655 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 [2024-09-27 22:36:43.533390] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:47.914 [2024-09-27 22:36:43.533459] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.914 [2024-09-27 22:36:43.533486] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:47.914 [2024-09-27 22:36:43.533498] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.914 [2024-09-27 22:36:43.534038] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.914 [2024-09-27 22:36:43.534059] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:47.914 [2024-09-27 22:36:43.534166] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:47.914 [2024-09-27 22:36:43.534196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:47.914 [2024-09-27 22:36:43.534366] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:47.914 [2024-09-27 22:36:43.534377] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:47.914 [2024-09-27 22:36:43.534651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:47.914 [2024-09-27 22:36:43.543622] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:47.914 [2024-09-27 22:36:43.543653] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:47.914 [2024-09-27 22:36:43.543938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.914 pt4 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.914 22:36:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.914 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.914 "name": "raid_bdev1", 00:19:47.914 "uuid": "03a35c7f-1d2c-4480-845e-2e45c7f1d55f", 00:19:47.914 "strip_size_kb": 64, 00:19:47.914 "state": "online", 00:19:47.914 "raid_level": "raid5f", 00:19:47.914 "superblock": true, 00:19:47.914 "num_base_bdevs": 4, 00:19:47.914 "num_base_bdevs_discovered": 3, 00:19:47.914 "num_base_bdevs_operational": 3, 00:19:47.914 "base_bdevs_list": [ 00:19:47.914 { 00:19:47.914 "name": null, 00:19:47.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.914 "is_configured": false, 00:19:47.914 "data_offset": 2048, 00:19:47.914 "data_size": 63488 00:19:47.914 }, 00:19:47.914 { 00:19:47.914 "name": "pt2", 00:19:47.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.914 "is_configured": true, 00:19:47.914 "data_offset": 2048, 00:19:47.914 "data_size": 63488 00:19:47.914 }, 00:19:47.914 { 00:19:47.914 "name": "pt3", 00:19:47.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:47.914 "is_configured": true, 00:19:47.914 "data_offset": 2048, 00:19:47.914 "data_size": 63488 
00:19:47.914 }, 00:19:47.914 { 00:19:47.914 "name": "pt4", 00:19:47.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:47.914 "is_configured": true, 00:19:47.914 "data_offset": 2048, 00:19:47.914 "data_size": 63488 00:19:47.914 } 00:19:47.914 ] 00:19:47.914 }' 00:19:47.915 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.915 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.174 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.175 22:36:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.175 [2024-09-27 22:36:44.004359] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:48.175 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.175 22:36:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 03a35c7f-1d2c-4480-845e-2e45c7f1d55f '!=' 03a35c7f-1d2c-4480-845e-2e45c7f1d55f ']' 00:19:48.175 22:36:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85235 00:19:48.175 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85235 ']' 00:19:48.175 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85235 00:19:48.175 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:48.175 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.434 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85235 00:19:48.434 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:48.434 killing process with pid 85235 00:19:48.434 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:48.434 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85235' 00:19:48.434 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 85235 00:19:48.434 [2024-09-27 22:36:44.083538] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.434 22:36:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 85235 00:19:48.434 [2024-09-27 22:36:44.083652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.434 [2024-09-27 22:36:44.083731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.434 [2024-09-27 22:36:44.083746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:48.692 [2024-09-27 22:36:44.505115] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.229 22:36:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:51.229 
00:19:51.229 real 0m9.592s 00:19:51.229 user 0m14.281s 00:19:51.229 sys 0m1.939s 00:19:51.229 22:36:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:51.229 22:36:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.229 ************************************ 00:19:51.229 END TEST raid5f_superblock_test 00:19:51.229 ************************************ 00:19:51.229 22:36:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:51.230 22:36:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:51.230 22:36:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:51.230 22:36:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:51.230 22:36:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.230 ************************************ 00:19:51.230 START TEST raid5f_rebuild_test 00:19:51.230 ************************************ 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:51.230 22:36:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85731 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85731 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85731 ']' 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.230 22:36:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.230 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:51.230 Zero copy mechanism will not be used. 00:19:51.230 [2024-09-27 22:36:46.660374] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:19:51.230 [2024-09-27 22:36:46.660500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85731 ] 00:19:51.230 [2024-09-27 22:36:46.828071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.230 [2024-09-27 22:36:47.051022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.490 [2024-09-27 22:36:47.288764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.490 [2024-09-27 22:36:47.288799] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 BaseBdev1_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 [2024-09-27 22:36:47.804437] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:52.058 [2024-09-27 22:36:47.804506] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.058 [2024-09-27 22:36:47.804530] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:52.058 [2024-09-27 22:36:47.804548] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.058 [2024-09-27 22:36:47.806901] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.058 [2024-09-27 22:36:47.806941] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:52.058 BaseBdev1 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 BaseBdev2_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 [2024-09-27 22:36:47.863759] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:52.058 [2024-09-27 22:36:47.863825] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.058 [2024-09-27 22:36:47.863853] vbdev_passthru.c: 
762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:52.058 [2024-09-27 22:36:47.863868] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.058 [2024-09-27 22:36:47.866288] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.058 [2024-09-27 22:36:47.866328] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:52.058 BaseBdev2 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 BaseBdev3_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.058 [2024-09-27 22:36:47.926028] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:52.058 [2024-09-27 22:36:47.926085] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.058 [2024-09-27 22:36:47.926109] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:52.058 [2024-09-27 22:36:47.926124] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.058 
[2024-09-27 22:36:47.928464] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.058 [2024-09-27 22:36:47.928506] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:52.058 BaseBdev3 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.058 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 BaseBdev4_malloc 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 [2024-09-27 22:36:47.988548] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:52.318 [2024-09-27 22:36:47.988606] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.318 [2024-09-27 22:36:47.988627] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:52.318 [2024-09-27 22:36:47.988641] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.318 [2024-09-27 22:36:47.991029] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.318 [2024-09-27 22:36:47.991068] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:19:52.318 BaseBdev4 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.318 22:36:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 spare_malloc 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 spare_delay 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 [2024-09-27 22:36:48.063144] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.318 [2024-09-27 22:36:48.063203] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.318 [2024-09-27 22:36:48.063226] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:52.318 [2024-09-27 22:36:48.063240] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.318 [2024-09-27 22:36:48.065632] vbdev_passthru.c: 
790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.318 [2024-09-27 22:36:48.065671] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.318 spare 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.318 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.318 [2024-09-27 22:36:48.075185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.318 [2024-09-27 22:36:48.077259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.318 [2024-09-27 22:36:48.077327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:52.318 [2024-09-27 22:36:48.077380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:52.318 [2024-09-27 22:36:48.077477] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:52.319 [2024-09-27 22:36:48.077491] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:52.319 [2024-09-27 22:36:48.077763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:52.319 [2024-09-27 22:36:48.086412] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:52.319 [2024-09-27 22:36:48.086452] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:52.319 [2024-09-27 22:36:48.086665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.319 22:36:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.319 "name": "raid_bdev1", 00:19:52.319 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:52.319 "strip_size_kb": 64, 00:19:52.319 "state": "online", 00:19:52.319 
"raid_level": "raid5f", 00:19:52.319 "superblock": false, 00:19:52.319 "num_base_bdevs": 4, 00:19:52.319 "num_base_bdevs_discovered": 4, 00:19:52.319 "num_base_bdevs_operational": 4, 00:19:52.319 "base_bdevs_list": [ 00:19:52.319 { 00:19:52.319 "name": "BaseBdev1", 00:19:52.319 "uuid": "898e3eed-b690-5fab-b40a-eb1305c69dee", 00:19:52.319 "is_configured": true, 00:19:52.319 "data_offset": 0, 00:19:52.319 "data_size": 65536 00:19:52.319 }, 00:19:52.319 { 00:19:52.319 "name": "BaseBdev2", 00:19:52.319 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:52.319 "is_configured": true, 00:19:52.319 "data_offset": 0, 00:19:52.319 "data_size": 65536 00:19:52.319 }, 00:19:52.319 { 00:19:52.319 "name": "BaseBdev3", 00:19:52.319 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:52.319 "is_configured": true, 00:19:52.319 "data_offset": 0, 00:19:52.319 "data_size": 65536 00:19:52.319 }, 00:19:52.319 { 00:19:52.319 "name": "BaseBdev4", 00:19:52.319 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:52.319 "is_configured": true, 00:19:52.319 "data_offset": 0, 00:19:52.319 "data_size": 65536 00:19:52.319 } 00:19:52.319 ] 00:19:52.319 }' 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.319 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.887 [2024-09-27 22:36:48.534145] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:19:52.887 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:53.147 [2024-09-27 22:36:48.817596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:53.147 /dev/nbd0 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:53.147 1+0 records in 00:19:53.147 1+0 records out 00:19:53.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320749 s, 12.8 MB/s 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:53.147 22:36:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:53.716 512+0 records in 00:19:53.716 512+0 records out 00:19:53.716 100663296 bytes (101 MB, 96 MiB) copied, 0.531663 s, 189 MB/s 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:53.716 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:53.975 [2024-09-27 22:36:49.635271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.975 [2024-09-27 22:36:49.672120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.975 "name": "raid_bdev1", 00:19:53.975 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:53.975 "strip_size_kb": 64, 00:19:53.975 "state": "online", 00:19:53.975 "raid_level": "raid5f", 00:19:53.975 "superblock": false, 00:19:53.975 "num_base_bdevs": 4, 00:19:53.975 "num_base_bdevs_discovered": 3, 00:19:53.975 "num_base_bdevs_operational": 3, 00:19:53.975 "base_bdevs_list": [ 00:19:53.975 { 00:19:53.975 "name": null, 00:19:53.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.975 "is_configured": false, 00:19:53.975 "data_offset": 0, 00:19:53.975 "data_size": 65536 00:19:53.975 }, 00:19:53.975 { 00:19:53.975 "name": "BaseBdev2", 00:19:53.975 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:53.975 "is_configured": true, 00:19:53.975 "data_offset": 0, 00:19:53.975 "data_size": 65536 00:19:53.975 }, 00:19:53.975 { 00:19:53.975 "name": "BaseBdev3", 00:19:53.975 "uuid": 
"7f12f17e-0857-511f-816d-558f4dad443c", 00:19:53.975 "is_configured": true, 00:19:53.975 "data_offset": 0, 00:19:53.975 "data_size": 65536 00:19:53.975 }, 00:19:53.975 { 00:19:53.975 "name": "BaseBdev4", 00:19:53.975 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:53.975 "is_configured": true, 00:19:53.975 "data_offset": 0, 00:19:53.975 "data_size": 65536 00:19:53.975 } 00:19:53.975 ] 00:19:53.975 }' 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.975 22:36:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.234 22:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:54.234 22:36:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.234 22:36:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.234 [2024-09-27 22:36:50.043533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.234 [2024-09-27 22:36:50.063016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:54.234 22:36:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.234 22:36:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:54.234 [2024-09-27 22:36:50.074703] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.615 22:36:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.615 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.615 "name": "raid_bdev1", 00:19:55.615 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:55.615 "strip_size_kb": 64, 00:19:55.615 "state": "online", 00:19:55.615 "raid_level": "raid5f", 00:19:55.615 "superblock": false, 00:19:55.615 "num_base_bdevs": 4, 00:19:55.615 "num_base_bdevs_discovered": 4, 00:19:55.615 "num_base_bdevs_operational": 4, 00:19:55.615 "process": { 00:19:55.615 "type": "rebuild", 00:19:55.615 "target": "spare", 00:19:55.615 "progress": { 00:19:55.615 "blocks": 19200, 00:19:55.615 "percent": 9 00:19:55.615 } 00:19:55.615 }, 00:19:55.615 "base_bdevs_list": [ 00:19:55.615 { 00:19:55.615 "name": "spare", 00:19:55.615 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:19:55.615 "is_configured": true, 00:19:55.615 "data_offset": 0, 00:19:55.615 "data_size": 65536 00:19:55.615 }, 00:19:55.615 { 00:19:55.615 "name": "BaseBdev2", 00:19:55.615 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:55.615 "is_configured": true, 00:19:55.615 "data_offset": 0, 00:19:55.615 "data_size": 65536 00:19:55.615 }, 00:19:55.616 { 00:19:55.616 "name": "BaseBdev3", 00:19:55.616 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:55.616 "is_configured": true, 00:19:55.616 "data_offset": 0, 00:19:55.616 "data_size": 65536 00:19:55.616 }, 
00:19:55.616 { 00:19:55.616 "name": "BaseBdev4", 00:19:55.616 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:55.616 "is_configured": true, 00:19:55.616 "data_offset": 0, 00:19:55.616 "data_size": 65536 00:19:55.616 } 00:19:55.616 ] 00:19:55.616 }' 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.616 [2024-09-27 22:36:51.201920] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.616 [2024-09-27 22:36:51.283108] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:55.616 [2024-09-27 22:36:51.283190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.616 [2024-09-27 22:36:51.283209] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:55.616 [2024-09-27 22:36:51.283222] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.616 "name": "raid_bdev1", 00:19:55.616 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:55.616 "strip_size_kb": 64, 00:19:55.616 "state": "online", 00:19:55.616 "raid_level": "raid5f", 00:19:55.616 "superblock": false, 00:19:55.616 "num_base_bdevs": 4, 00:19:55.616 "num_base_bdevs_discovered": 3, 00:19:55.616 "num_base_bdevs_operational": 3, 00:19:55.616 "base_bdevs_list": [ 00:19:55.616 { 00:19:55.616 "name": null, 00:19:55.616 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:55.616 "is_configured": false, 00:19:55.616 "data_offset": 0, 00:19:55.616 "data_size": 65536 00:19:55.616 }, 00:19:55.616 { 00:19:55.616 "name": "BaseBdev2", 00:19:55.616 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:55.616 "is_configured": true, 00:19:55.616 "data_offset": 0, 00:19:55.616 "data_size": 65536 00:19:55.616 }, 00:19:55.616 { 00:19:55.616 "name": "BaseBdev3", 00:19:55.616 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:55.616 "is_configured": true, 00:19:55.616 "data_offset": 0, 00:19:55.616 "data_size": 65536 00:19:55.616 }, 00:19:55.616 { 00:19:55.616 "name": "BaseBdev4", 00:19:55.616 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:55.616 "is_configured": true, 00:19:55.616 "data_offset": 0, 00:19:55.616 "data_size": 65536 00:19:55.616 } 00:19:55.616 ] 00:19:55.616 }' 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.616 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.875 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.875 22:36:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.134 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.134 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.134 "name": "raid_bdev1", 00:19:56.134 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:56.134 "strip_size_kb": 64, 00:19:56.134 "state": "online", 00:19:56.135 "raid_level": "raid5f", 00:19:56.135 "superblock": false, 00:19:56.135 "num_base_bdevs": 4, 00:19:56.135 "num_base_bdevs_discovered": 3, 00:19:56.135 "num_base_bdevs_operational": 3, 00:19:56.135 "base_bdevs_list": [ 00:19:56.135 { 00:19:56.135 "name": null, 00:19:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.135 "is_configured": false, 00:19:56.135 "data_offset": 0, 00:19:56.135 "data_size": 65536 00:19:56.135 }, 00:19:56.135 { 00:19:56.135 "name": "BaseBdev2", 00:19:56.135 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:56.135 "is_configured": true, 00:19:56.135 "data_offset": 0, 00:19:56.135 "data_size": 65536 00:19:56.135 }, 00:19:56.135 { 00:19:56.135 "name": "BaseBdev3", 00:19:56.135 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:56.135 "is_configured": true, 00:19:56.135 "data_offset": 0, 00:19:56.135 "data_size": 65536 00:19:56.135 }, 00:19:56.135 { 00:19:56.135 "name": "BaseBdev4", 00:19:56.135 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:56.135 "is_configured": true, 00:19:56.135 "data_offset": 0, 00:19:56.135 "data_size": 65536 00:19:56.135 } 00:19:56.135 ] 00:19:56.135 }' 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.135 [2024-09-27 22:36:51.843029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.135 [2024-09-27 22:36:51.860816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.135 22:36:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:56.135 [2024-09-27 22:36:51.872180] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.075 22:36:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.075 "name": "raid_bdev1", 00:19:57.075 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:57.075 "strip_size_kb": 64, 00:19:57.075 "state": "online", 00:19:57.075 "raid_level": "raid5f", 00:19:57.075 "superblock": false, 00:19:57.075 "num_base_bdevs": 4, 00:19:57.075 "num_base_bdevs_discovered": 4, 00:19:57.075 "num_base_bdevs_operational": 4, 00:19:57.075 "process": { 00:19:57.075 "type": "rebuild", 00:19:57.075 "target": "spare", 00:19:57.075 "progress": { 00:19:57.075 "blocks": 17280, 00:19:57.075 "percent": 8 00:19:57.075 } 00:19:57.075 }, 00:19:57.075 "base_bdevs_list": [ 00:19:57.075 { 00:19:57.075 "name": "spare", 00:19:57.075 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:19:57.075 "is_configured": true, 00:19:57.075 "data_offset": 0, 00:19:57.075 "data_size": 65536 00:19:57.075 }, 00:19:57.075 { 00:19:57.075 "name": "BaseBdev2", 00:19:57.075 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:57.075 "is_configured": true, 00:19:57.075 "data_offset": 0, 00:19:57.075 "data_size": 65536 00:19:57.075 }, 00:19:57.075 { 00:19:57.075 "name": "BaseBdev3", 00:19:57.075 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:57.075 "is_configured": true, 00:19:57.075 "data_offset": 0, 00:19:57.075 "data_size": 65536 00:19:57.075 }, 00:19:57.075 { 00:19:57.075 "name": "BaseBdev4", 00:19:57.075 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:57.075 "is_configured": true, 00:19:57.075 "data_offset": 0, 00:19:57.075 "data_size": 65536 00:19:57.075 } 00:19:57.075 ] 00:19:57.075 }' 00:19:57.075 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.334 22:36:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.334 22:36:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=717 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.334 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.334 "name": "raid_bdev1", 00:19:57.334 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 
00:19:57.334 "strip_size_kb": 64, 00:19:57.334 "state": "online", 00:19:57.334 "raid_level": "raid5f", 00:19:57.334 "superblock": false, 00:19:57.334 "num_base_bdevs": 4, 00:19:57.335 "num_base_bdevs_discovered": 4, 00:19:57.335 "num_base_bdevs_operational": 4, 00:19:57.335 "process": { 00:19:57.335 "type": "rebuild", 00:19:57.335 "target": "spare", 00:19:57.335 "progress": { 00:19:57.335 "blocks": 21120, 00:19:57.335 "percent": 10 00:19:57.335 } 00:19:57.335 }, 00:19:57.335 "base_bdevs_list": [ 00:19:57.335 { 00:19:57.335 "name": "spare", 00:19:57.335 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:19:57.335 "is_configured": true, 00:19:57.335 "data_offset": 0, 00:19:57.335 "data_size": 65536 00:19:57.335 }, 00:19:57.335 { 00:19:57.335 "name": "BaseBdev2", 00:19:57.335 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:57.335 "is_configured": true, 00:19:57.335 "data_offset": 0, 00:19:57.335 "data_size": 65536 00:19:57.335 }, 00:19:57.335 { 00:19:57.335 "name": "BaseBdev3", 00:19:57.335 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:57.335 "is_configured": true, 00:19:57.335 "data_offset": 0, 00:19:57.335 "data_size": 65536 00:19:57.335 }, 00:19:57.335 { 00:19:57.335 "name": "BaseBdev4", 00:19:57.335 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:57.335 "is_configured": true, 00:19:57.335 "data_offset": 0, 00:19:57.335 "data_size": 65536 00:19:57.335 } 00:19:57.335 ] 00:19:57.335 }' 00:19:57.335 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.335 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.335 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.335 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.335 22:36:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:58.711 22:36:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.712 "name": "raid_bdev1", 00:19:58.712 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:58.712 "strip_size_kb": 64, 00:19:58.712 "state": "online", 00:19:58.712 "raid_level": "raid5f", 00:19:58.712 "superblock": false, 00:19:58.712 "num_base_bdevs": 4, 00:19:58.712 "num_base_bdevs_discovered": 4, 00:19:58.712 "num_base_bdevs_operational": 4, 00:19:58.712 "process": { 00:19:58.712 "type": "rebuild", 00:19:58.712 "target": "spare", 00:19:58.712 "progress": { 00:19:58.712 "blocks": 42240, 00:19:58.712 "percent": 21 00:19:58.712 } 00:19:58.712 }, 00:19:58.712 "base_bdevs_list": [ 00:19:58.712 { 00:19:58.712 "name": "spare", 00:19:58.712 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 
00:19:58.712 "is_configured": true, 00:19:58.712 "data_offset": 0, 00:19:58.712 "data_size": 65536 00:19:58.712 }, 00:19:58.712 { 00:19:58.712 "name": "BaseBdev2", 00:19:58.712 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:58.712 "is_configured": true, 00:19:58.712 "data_offset": 0, 00:19:58.712 "data_size": 65536 00:19:58.712 }, 00:19:58.712 { 00:19:58.712 "name": "BaseBdev3", 00:19:58.712 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:58.712 "is_configured": true, 00:19:58.712 "data_offset": 0, 00:19:58.712 "data_size": 65536 00:19:58.712 }, 00:19:58.712 { 00:19:58.712 "name": "BaseBdev4", 00:19:58.712 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:58.712 "is_configured": true, 00:19:58.712 "data_offset": 0, 00:19:58.712 "data_size": 65536 00:19:58.712 } 00:19:58.712 ] 00:19:58.712 }' 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.712 22:36:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.647 "name": "raid_bdev1", 00:19:59.647 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:19:59.647 "strip_size_kb": 64, 00:19:59.647 "state": "online", 00:19:59.647 "raid_level": "raid5f", 00:19:59.647 "superblock": false, 00:19:59.647 "num_base_bdevs": 4, 00:19:59.647 "num_base_bdevs_discovered": 4, 00:19:59.647 "num_base_bdevs_operational": 4, 00:19:59.647 "process": { 00:19:59.647 "type": "rebuild", 00:19:59.647 "target": "spare", 00:19:59.647 "progress": { 00:19:59.647 "blocks": 65280, 00:19:59.647 "percent": 33 00:19:59.647 } 00:19:59.647 }, 00:19:59.647 "base_bdevs_list": [ 00:19:59.647 { 00:19:59.647 "name": "spare", 00:19:59.647 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:19:59.647 "is_configured": true, 00:19:59.647 "data_offset": 0, 00:19:59.647 "data_size": 65536 00:19:59.647 }, 00:19:59.647 { 00:19:59.647 "name": "BaseBdev2", 00:19:59.647 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:19:59.647 "is_configured": true, 00:19:59.647 "data_offset": 0, 00:19:59.647 "data_size": 65536 00:19:59.647 }, 00:19:59.647 { 00:19:59.647 "name": "BaseBdev3", 00:19:59.647 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:19:59.647 "is_configured": true, 00:19:59.647 "data_offset": 0, 00:19:59.647 "data_size": 65536 00:19:59.647 }, 00:19:59.647 { 00:19:59.647 "name": 
"BaseBdev4", 00:19:59.647 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:19:59.647 "is_configured": true, 00:19:59.647 "data_offset": 0, 00:19:59.647 "data_size": 65536 00:19:59.647 } 00:19:59.647 ] 00:19:59.647 }' 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.647 22:36:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.586 22:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.844 22:36:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.844 22:36:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.844 "name": "raid_bdev1", 00:20:00.844 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:00.844 "strip_size_kb": 64, 00:20:00.844 "state": "online", 00:20:00.844 "raid_level": "raid5f", 00:20:00.844 "superblock": false, 00:20:00.844 "num_base_bdevs": 4, 00:20:00.844 "num_base_bdevs_discovered": 4, 00:20:00.844 "num_base_bdevs_operational": 4, 00:20:00.844 "process": { 00:20:00.844 "type": "rebuild", 00:20:00.844 "target": "spare", 00:20:00.844 "progress": { 00:20:00.844 "blocks": 86400, 00:20:00.844 "percent": 43 00:20:00.844 } 00:20:00.844 }, 00:20:00.844 "base_bdevs_list": [ 00:20:00.844 { 00:20:00.844 "name": "spare", 00:20:00.844 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:00.844 "is_configured": true, 00:20:00.844 "data_offset": 0, 00:20:00.844 "data_size": 65536 00:20:00.844 }, 00:20:00.844 { 00:20:00.844 "name": "BaseBdev2", 00:20:00.844 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:00.844 "is_configured": true, 00:20:00.844 "data_offset": 0, 00:20:00.844 "data_size": 65536 00:20:00.844 }, 00:20:00.844 { 00:20:00.844 "name": "BaseBdev3", 00:20:00.844 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:00.844 "is_configured": true, 00:20:00.844 "data_offset": 0, 00:20:00.844 "data_size": 65536 00:20:00.844 }, 00:20:00.844 { 00:20:00.844 "name": "BaseBdev4", 00:20:00.844 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:00.844 "is_configured": true, 00:20:00.844 "data_offset": 0, 00:20:00.844 "data_size": 65536 00:20:00.844 } 00:20:00.844 ] 00:20:00.844 }' 00:20:00.844 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.844 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.844 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.844 22:36:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.844 22:36:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.780 "name": "raid_bdev1", 00:20:01.780 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:01.780 "strip_size_kb": 64, 00:20:01.780 "state": "online", 00:20:01.780 "raid_level": "raid5f", 00:20:01.780 "superblock": false, 00:20:01.780 "num_base_bdevs": 4, 00:20:01.780 "num_base_bdevs_discovered": 4, 00:20:01.780 "num_base_bdevs_operational": 4, 00:20:01.780 "process": { 00:20:01.780 "type": "rebuild", 00:20:01.780 "target": "spare", 00:20:01.780 "progress": { 00:20:01.780 "blocks": 107520, 00:20:01.780 "percent": 54 00:20:01.780 } 
00:20:01.780 }, 00:20:01.780 "base_bdevs_list": [ 00:20:01.780 { 00:20:01.780 "name": "spare", 00:20:01.780 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:01.780 "is_configured": true, 00:20:01.780 "data_offset": 0, 00:20:01.780 "data_size": 65536 00:20:01.780 }, 00:20:01.780 { 00:20:01.780 "name": "BaseBdev2", 00:20:01.780 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:01.780 "is_configured": true, 00:20:01.780 "data_offset": 0, 00:20:01.780 "data_size": 65536 00:20:01.780 }, 00:20:01.780 { 00:20:01.780 "name": "BaseBdev3", 00:20:01.780 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:01.780 "is_configured": true, 00:20:01.780 "data_offset": 0, 00:20:01.780 "data_size": 65536 00:20:01.780 }, 00:20:01.780 { 00:20:01.780 "name": "BaseBdev4", 00:20:01.780 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:01.780 "is_configured": true, 00:20:01.780 "data_offset": 0, 00:20:01.780 "data_size": 65536 00:20:01.780 } 00:20:01.780 ] 00:20:01.780 }' 00:20:01.780 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.039 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.039 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.039 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.039 22:36:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.974 
22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.974 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.974 "name": "raid_bdev1", 00:20:02.974 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:02.974 "strip_size_kb": 64, 00:20:02.974 "state": "online", 00:20:02.974 "raid_level": "raid5f", 00:20:02.974 "superblock": false, 00:20:02.974 "num_base_bdevs": 4, 00:20:02.974 "num_base_bdevs_discovered": 4, 00:20:02.974 "num_base_bdevs_operational": 4, 00:20:02.974 "process": { 00:20:02.974 "type": "rebuild", 00:20:02.974 "target": "spare", 00:20:02.974 "progress": { 00:20:02.974 "blocks": 130560, 00:20:02.974 "percent": 66 00:20:02.974 } 00:20:02.974 }, 00:20:02.974 "base_bdevs_list": [ 00:20:02.974 { 00:20:02.974 "name": "spare", 00:20:02.974 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:02.974 "is_configured": true, 00:20:02.974 "data_offset": 0, 00:20:02.974 "data_size": 65536 00:20:02.974 }, 00:20:02.974 { 00:20:02.974 "name": "BaseBdev2", 00:20:02.974 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:02.974 "is_configured": true, 00:20:02.974 "data_offset": 0, 00:20:02.974 "data_size": 65536 00:20:02.974 }, 00:20:02.974 { 00:20:02.974 "name": "BaseBdev3", 00:20:02.974 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 
00:20:02.974 "is_configured": true, 00:20:02.974 "data_offset": 0, 00:20:02.974 "data_size": 65536 00:20:02.974 }, 00:20:02.974 { 00:20:02.974 "name": "BaseBdev4", 00:20:02.974 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:02.974 "is_configured": true, 00:20:02.974 "data_offset": 0, 00:20:02.974 "data_size": 65536 00:20:02.974 } 00:20:02.974 ] 00:20:02.974 }' 00:20:02.975 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.975 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.975 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.233 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:03.233 22:36:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.172 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.172 "name": "raid_bdev1", 00:20:04.172 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:04.172 "strip_size_kb": 64, 00:20:04.172 "state": "online", 00:20:04.172 "raid_level": "raid5f", 00:20:04.172 "superblock": false, 00:20:04.172 "num_base_bdevs": 4, 00:20:04.172 "num_base_bdevs_discovered": 4, 00:20:04.172 "num_base_bdevs_operational": 4, 00:20:04.172 "process": { 00:20:04.172 "type": "rebuild", 00:20:04.172 "target": "spare", 00:20:04.172 "progress": { 00:20:04.172 "blocks": 151680, 00:20:04.172 "percent": 77 00:20:04.172 } 00:20:04.172 }, 00:20:04.172 "base_bdevs_list": [ 00:20:04.172 { 00:20:04.172 "name": "spare", 00:20:04.172 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:04.172 "is_configured": true, 00:20:04.172 "data_offset": 0, 00:20:04.172 "data_size": 65536 00:20:04.172 }, 00:20:04.172 { 00:20:04.173 "name": "BaseBdev2", 00:20:04.173 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:04.173 "is_configured": true, 00:20:04.173 "data_offset": 0, 00:20:04.173 "data_size": 65536 00:20:04.173 }, 00:20:04.173 { 00:20:04.173 "name": "BaseBdev3", 00:20:04.173 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:04.173 "is_configured": true, 00:20:04.173 "data_offset": 0, 00:20:04.173 "data_size": 65536 00:20:04.173 }, 00:20:04.173 { 00:20:04.173 "name": "BaseBdev4", 00:20:04.173 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:04.173 "is_configured": true, 00:20:04.173 "data_offset": 0, 00:20:04.173 "data_size": 65536 00:20:04.173 } 00:20:04.173 ] 00:20:04.173 }' 00:20:04.173 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.173 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:04.173 22:36:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.173 22:37:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.173 22:37:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.549 "name": "raid_bdev1", 00:20:05.549 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:05.549 "strip_size_kb": 64, 00:20:05.549 "state": "online", 00:20:05.549 "raid_level": "raid5f", 00:20:05.549 "superblock": false, 00:20:05.549 "num_base_bdevs": 4, 00:20:05.549 "num_base_bdevs_discovered": 4, 00:20:05.549 "num_base_bdevs_operational": 4, 00:20:05.549 
"process": { 00:20:05.549 "type": "rebuild", 00:20:05.549 "target": "spare", 00:20:05.549 "progress": { 00:20:05.549 "blocks": 174720, 00:20:05.549 "percent": 88 00:20:05.549 } 00:20:05.549 }, 00:20:05.549 "base_bdevs_list": [ 00:20:05.549 { 00:20:05.549 "name": "spare", 00:20:05.549 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:05.549 "is_configured": true, 00:20:05.549 "data_offset": 0, 00:20:05.549 "data_size": 65536 00:20:05.549 }, 00:20:05.549 { 00:20:05.549 "name": "BaseBdev2", 00:20:05.549 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:05.549 "is_configured": true, 00:20:05.549 "data_offset": 0, 00:20:05.549 "data_size": 65536 00:20:05.549 }, 00:20:05.549 { 00:20:05.549 "name": "BaseBdev3", 00:20:05.549 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:05.549 "is_configured": true, 00:20:05.549 "data_offset": 0, 00:20:05.549 "data_size": 65536 00:20:05.549 }, 00:20:05.549 { 00:20:05.549 "name": "BaseBdev4", 00:20:05.549 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:05.549 "is_configured": true, 00:20:05.549 "data_offset": 0, 00:20:05.549 "data_size": 65536 00:20:05.549 } 00:20:05.549 ] 00:20:05.549 }' 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.549 22:37:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.488 "name": "raid_bdev1", 00:20:06.488 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:06.488 "strip_size_kb": 64, 00:20:06.488 "state": "online", 00:20:06.488 "raid_level": "raid5f", 00:20:06.488 "superblock": false, 00:20:06.488 "num_base_bdevs": 4, 00:20:06.488 "num_base_bdevs_discovered": 4, 00:20:06.488 "num_base_bdevs_operational": 4, 00:20:06.488 "process": { 00:20:06.488 "type": "rebuild", 00:20:06.488 "target": "spare", 00:20:06.488 "progress": { 00:20:06.488 "blocks": 195840, 00:20:06.488 "percent": 99 00:20:06.488 } 00:20:06.488 }, 00:20:06.488 "base_bdevs_list": [ 00:20:06.488 { 00:20:06.488 "name": "spare", 00:20:06.488 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:06.488 "is_configured": true, 00:20:06.488 "data_offset": 0, 00:20:06.488 "data_size": 65536 00:20:06.488 }, 00:20:06.488 { 00:20:06.488 "name": "BaseBdev2", 00:20:06.488 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:06.488 "is_configured": true, 00:20:06.488 
"data_offset": 0, 00:20:06.488 "data_size": 65536 00:20:06.488 }, 00:20:06.488 { 00:20:06.488 "name": "BaseBdev3", 00:20:06.488 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:06.488 "is_configured": true, 00:20:06.488 "data_offset": 0, 00:20:06.488 "data_size": 65536 00:20:06.488 }, 00:20:06.488 { 00:20:06.488 "name": "BaseBdev4", 00:20:06.488 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:06.488 "is_configured": true, 00:20:06.488 "data_offset": 0, 00:20:06.488 "data_size": 65536 00:20:06.488 } 00:20:06.488 ] 00:20:06.488 }' 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.488 [2024-09-27 22:37:02.237143] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:06.488 [2024-09-27 22:37:02.237215] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:06.488 [2024-09-27 22:37:02.237272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.488 22:37:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.865 "name": "raid_bdev1", 00:20:07.865 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:07.865 "strip_size_kb": 64, 00:20:07.865 "state": "online", 00:20:07.865 "raid_level": "raid5f", 00:20:07.865 "superblock": false, 00:20:07.865 "num_base_bdevs": 4, 00:20:07.865 "num_base_bdevs_discovered": 4, 00:20:07.865 "num_base_bdevs_operational": 4, 00:20:07.865 "base_bdevs_list": [ 00:20:07.865 { 00:20:07.865 "name": "spare", 00:20:07.865 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev2", 00:20:07.865 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev3", 00:20:07.865 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev4", 00:20:07.865 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:07.865 "is_configured": 
true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 } 00:20:07.865 ] 00:20:07.865 }' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.865 "name": "raid_bdev1", 00:20:07.865 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:07.865 "strip_size_kb": 64, 00:20:07.865 "state": 
"online", 00:20:07.865 "raid_level": "raid5f", 00:20:07.865 "superblock": false, 00:20:07.865 "num_base_bdevs": 4, 00:20:07.865 "num_base_bdevs_discovered": 4, 00:20:07.865 "num_base_bdevs_operational": 4, 00:20:07.865 "base_bdevs_list": [ 00:20:07.865 { 00:20:07.865 "name": "spare", 00:20:07.865 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev2", 00:20:07.865 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev3", 00:20:07.865 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 }, 00:20:07.865 { 00:20:07.865 "name": "BaseBdev4", 00:20:07.865 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:07.865 "is_configured": true, 00:20:07.865 "data_offset": 0, 00:20:07.865 "data_size": 65536 00:20:07.865 } 00:20:07.865 ] 00:20:07.865 }' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.865 22:37:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.865 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.866 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.866 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.866 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.866 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.866 "name": "raid_bdev1", 00:20:07.866 "uuid": "9aeb2cfe-0063-4f56-a2b2-9d02a7ead56c", 00:20:07.866 "strip_size_kb": 64, 00:20:07.866 "state": "online", 00:20:07.866 "raid_level": "raid5f", 00:20:07.866 "superblock": false, 00:20:07.866 "num_base_bdevs": 4, 00:20:07.866 "num_base_bdevs_discovered": 4, 00:20:07.866 "num_base_bdevs_operational": 4, 00:20:07.866 "base_bdevs_list": [ 00:20:07.866 { 00:20:07.866 "name": "spare", 00:20:07.866 "uuid": "2c4d0f6a-354d-513a-bca1-1e99f55f4940", 00:20:07.866 "is_configured": true, 00:20:07.866 "data_offset": 0, 00:20:07.866 "data_size": 65536 00:20:07.866 }, 00:20:07.866 { 00:20:07.866 
"name": "BaseBdev2", 00:20:07.866 "uuid": "5bc9baeb-b2ce-5e56-a043-c296306d3c8a", 00:20:07.866 "is_configured": true, 00:20:07.866 "data_offset": 0, 00:20:07.866 "data_size": 65536 00:20:07.866 }, 00:20:07.866 { 00:20:07.866 "name": "BaseBdev3", 00:20:07.866 "uuid": "7f12f17e-0857-511f-816d-558f4dad443c", 00:20:07.866 "is_configured": true, 00:20:07.866 "data_offset": 0, 00:20:07.866 "data_size": 65536 00:20:07.866 }, 00:20:07.866 { 00:20:07.866 "name": "BaseBdev4", 00:20:07.866 "uuid": "f5aecbfc-42cc-590f-aa75-bbd13afc3dc0", 00:20:07.866 "is_configured": true, 00:20:07.866 "data_offset": 0, 00:20:07.866 "data_size": 65536 00:20:07.866 } 00:20:07.866 ] 00:20:07.866 }' 00:20:07.866 22:37:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.866 22:37:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.466 [2024-09-27 22:37:04.020121] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.466 [2024-09-27 22:37:04.020156] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.466 [2024-09-27 22:37:04.020257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.466 [2024-09-27 22:37:04.020351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.466 [2024-09-27 22:37:04.020364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.466 22:37:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:08.466 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:08.466 /dev/nbd0 00:20:08.724 22:37:04 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:08.724 1+0 records in 00:20:08.724 1+0 records out 00:20:08.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346834 s, 11.8 MB/s 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:08.724 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:08.983 /dev/nbd1 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:08.983 1+0 records in 00:20:08.983 1+0 records out 00:20:08.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529943 s, 7.7 MB/s 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:08.983 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:09.242 22:37:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:09.242 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85731 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85731 ']' 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85731 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.500 22:37:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85731 00:20:09.758 killing process with pid 85731 00:20:09.758 Received shutdown signal, test time was about 60.000000 seconds 00:20:09.758 00:20:09.758 Latency(us) 00:20:09.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.758 =================================================================================================================== 00:20:09.758 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.758 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:09.758 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:09.758 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85731' 00:20:09.758 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 85731 00:20:09.759 [2024-09-27 22:37:05.386532] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.759 22:37:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 85731 00:20:10.325 [2024-09-27 22:37:05.898549] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:12.235 00:20:12.235 real 0m21.292s 00:20:12.235 user 0m24.991s 00:20:12.235 sys 0m2.690s 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.235 ************************************ 00:20:12.235 END TEST raid5f_rebuild_test 00:20:12.235 ************************************ 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.235 22:37:07 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:20:12.235 22:37:07 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:12.235 22:37:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.235 22:37:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.235 ************************************ 00:20:12.235 START TEST raid5f_rebuild_test_sb 00:20:12.235 ************************************ 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:12.235 22:37:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86259 
00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86259 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86259 ']' 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.235 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.236 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.236 22:37:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.236 [2024-09-27 22:37:08.040304] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:20:12.236 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:12.236 Zero copy mechanism will not be used. 
00:20:12.236 [2024-09-27 22:37:08.040605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86259 ] 00:20:12.506 [2024-09-27 22:37:08.204496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.764 [2024-09-27 22:37:08.432658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.022 [2024-09-27 22:37:08.659377] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.022 [2024-09-27 22:37:08.659628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.280 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.281 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:13.281 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:13.281 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:13.281 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.281 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.538 BaseBdev1_malloc 00:20:13.538 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.538 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:13.538 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.538 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.538 [2024-09-27 22:37:09.191747] vbdev_passthru.c: 
687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:13.538 [2024-09-27 22:37:09.191833] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.538 [2024-09-27 22:37:09.191860] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:13.538 [2024-09-27 22:37:09.191878] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.538 [2024-09-27 22:37:09.194395] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.538 [2024-09-27 22:37:09.194441] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:13.538 BaseBdev1 00:20:13.538 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.538 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 BaseBdev2_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 [2024-09-27 22:37:09.254133] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:13.539 [2024-09-27 22:37:09.254367] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:13.539 [2024-09-27 22:37:09.254411] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:13.539 [2024-09-27 22:37:09.254425] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.539 [2024-09-27 22:37:09.256911] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.539 [2024-09-27 22:37:09.256956] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:13.539 BaseBdev2 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 BaseBdev3_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 [2024-09-27 22:37:09.317077] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:13.539 [2024-09-27 22:37:09.317144] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.539 [2024-09-27 22:37:09.317170] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:13.539 [2024-09-27 
22:37:09.317184] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.539 [2024-09-27 22:37:09.319568] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.539 [2024-09-27 22:37:09.319614] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:13.539 BaseBdev3 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 BaseBdev4_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.539 [2024-09-27 22:37:09.378830] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:13.539 [2024-09-27 22:37:09.378893] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.539 [2024-09-27 22:37:09.378914] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:13.539 [2024-09-27 22:37:09.378928] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.539 [2024-09-27 22:37:09.381576] vbdev_passthru.c: 790:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:13.539 [2024-09-27 22:37:09.381711] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:13.539 BaseBdev4 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.539 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.796 spare_malloc 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.796 spare_delay 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.796 [2024-09-27 22:37:09.453687] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:13.796 [2024-09-27 22:37:09.453750] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.796 [2024-09-27 22:37:09.453773] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:20:13.796 [2024-09-27 22:37:09.453786] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.796 [2024-09-27 22:37:09.456179] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.796 [2024-09-27 22:37:09.456335] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:13.796 spare 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.796 [2024-09-27 22:37:09.465736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.796 [2024-09-27 22:37:09.467832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.796 [2024-09-27 22:37:09.468042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:13.796 [2024-09-27 22:37:09.468105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:13.796 [2024-09-27 22:37:09.468316] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:13.796 [2024-09-27 22:37:09.468330] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:13.796 [2024-09-27 22:37:09.468606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:13.796 [2024-09-27 22:37:09.477713] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:13.796 [2024-09-27 22:37:09.477735] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:20:13.796 [2024-09-27 22:37:09.477960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.796 22:37:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.796 "name": "raid_bdev1", 00:20:13.796 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:13.796 "strip_size_kb": 64, 00:20:13.796 "state": "online", 00:20:13.796 "raid_level": "raid5f", 00:20:13.796 "superblock": true, 00:20:13.796 "num_base_bdevs": 4, 00:20:13.796 "num_base_bdevs_discovered": 4, 00:20:13.796 "num_base_bdevs_operational": 4, 00:20:13.796 "base_bdevs_list": [ 00:20:13.796 { 00:20:13.796 "name": "BaseBdev1", 00:20:13.796 "uuid": "c0a21245-f55e-56dd-9922-e70c2cbd8b2a", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 2048, 00:20:13.796 "data_size": 63488 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev2", 00:20:13.796 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 2048, 00:20:13.796 "data_size": 63488 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev3", 00:20:13.796 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 2048, 00:20:13.796 "data_size": 63488 00:20:13.796 }, 00:20:13.796 { 00:20:13.796 "name": "BaseBdev4", 00:20:13.796 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:13.796 "is_configured": true, 00:20:13.796 "data_offset": 2048, 00:20:13.796 "data_size": 63488 00:20:13.796 } 00:20:13.796 ] 00:20:13.796 }' 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.796 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.055 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:14.055 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:14.055 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.055 22:37:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.055 [2024-09-27 22:37:09.901766] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.055 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:14.312 22:37:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.312 22:37:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:14.312 [2024-09-27 22:37:10.181228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:14.569 /dev/nbd0 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.569 1+0 records in 00:20:14.569 
1+0 records out 00:20:14.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271614 s, 15.1 MB/s 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:14.569 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:20:15.134 496+0 records in 00:20:15.134 496+0 records out 00:20:15.134 97517568 bytes (98 MB, 93 MiB) copied, 0.495928 s, 197 MB/s 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:15.134 22:37:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:15.134 [2024-09-27 22:37:10.970531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.134 [2024-09-27 22:37:10.987248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:15.134 22:37:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.134 22:37:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.392 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.392 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.392 "name": "raid_bdev1", 00:20:15.392 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:15.392 "strip_size_kb": 64, 00:20:15.392 "state": "online", 00:20:15.392 "raid_level": "raid5f", 00:20:15.392 "superblock": true, 00:20:15.392 "num_base_bdevs": 4, 00:20:15.392 "num_base_bdevs_discovered": 3, 00:20:15.392 "num_base_bdevs_operational": 3, 00:20:15.392 
"base_bdevs_list": [ 00:20:15.392 { 00:20:15.392 "name": null, 00:20:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.392 "is_configured": false, 00:20:15.392 "data_offset": 0, 00:20:15.392 "data_size": 63488 00:20:15.392 }, 00:20:15.392 { 00:20:15.392 "name": "BaseBdev2", 00:20:15.392 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:15.392 "is_configured": true, 00:20:15.392 "data_offset": 2048, 00:20:15.392 "data_size": 63488 00:20:15.392 }, 00:20:15.392 { 00:20:15.392 "name": "BaseBdev3", 00:20:15.392 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:15.392 "is_configured": true, 00:20:15.392 "data_offset": 2048, 00:20:15.392 "data_size": 63488 00:20:15.392 }, 00:20:15.392 { 00:20:15.392 "name": "BaseBdev4", 00:20:15.392 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:15.392 "is_configured": true, 00:20:15.392 "data_offset": 2048, 00:20:15.392 "data_size": 63488 00:20:15.392 } 00:20:15.392 ] 00:20:15.392 }' 00:20:15.392 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.392 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.650 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.650 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.650 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.650 [2024-09-27 22:37:11.434754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.650 [2024-09-27 22:37:11.451749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:20:15.650 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.650 22:37:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:15.650 [2024-09-27 22:37:11.463393] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.597 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.857 "name": "raid_bdev1", 00:20:16.857 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:16.857 "strip_size_kb": 64, 00:20:16.857 "state": "online", 00:20:16.857 "raid_level": "raid5f", 00:20:16.857 "superblock": true, 00:20:16.857 "num_base_bdevs": 4, 00:20:16.857 "num_base_bdevs_discovered": 4, 00:20:16.857 "num_base_bdevs_operational": 4, 00:20:16.857 "process": { 00:20:16.857 "type": "rebuild", 00:20:16.857 "target": "spare", 00:20:16.857 "progress": { 00:20:16.857 "blocks": 19200, 00:20:16.857 "percent": 10 00:20:16.857 } 00:20:16.857 }, 00:20:16.857 "base_bdevs_list": [ 00:20:16.857 { 00:20:16.857 "name": "spare", 00:20:16.857 "uuid": 
"97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:16.857 "is_configured": true, 00:20:16.857 "data_offset": 2048, 00:20:16.857 "data_size": 63488 00:20:16.857 }, 00:20:16.857 { 00:20:16.857 "name": "BaseBdev2", 00:20:16.857 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:16.857 "is_configured": true, 00:20:16.857 "data_offset": 2048, 00:20:16.857 "data_size": 63488 00:20:16.857 }, 00:20:16.857 { 00:20:16.857 "name": "BaseBdev3", 00:20:16.857 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:16.857 "is_configured": true, 00:20:16.857 "data_offset": 2048, 00:20:16.857 "data_size": 63488 00:20:16.857 }, 00:20:16.857 { 00:20:16.857 "name": "BaseBdev4", 00:20:16.857 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:16.857 "is_configured": true, 00:20:16.857 "data_offset": 2048, 00:20:16.857 "data_size": 63488 00:20:16.857 } 00:20:16.857 ] 00:20:16.857 }' 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.857 [2024-09-27 22:37:12.610575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.857 [2024-09-27 22:37:12.670621] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:16.857 [2024-09-27 22:37:12.670702] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.857 [2024-09-27 22:37:12.670721] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.857 [2024-09-27 22:37:12.670733] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.857 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:17.117 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.117 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.117 "name": "raid_bdev1", 00:20:17.117 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:17.117 "strip_size_kb": 64, 00:20:17.117 "state": "online", 00:20:17.117 "raid_level": "raid5f", 00:20:17.117 "superblock": true, 00:20:17.117 "num_base_bdevs": 4, 00:20:17.117 "num_base_bdevs_discovered": 3, 00:20:17.117 "num_base_bdevs_operational": 3, 00:20:17.117 "base_bdevs_list": [ 00:20:17.117 { 00:20:17.117 "name": null, 00:20:17.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.117 "is_configured": false, 00:20:17.117 "data_offset": 0, 00:20:17.117 "data_size": 63488 00:20:17.117 }, 00:20:17.117 { 00:20:17.117 "name": "BaseBdev2", 00:20:17.117 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:17.117 "is_configured": true, 00:20:17.117 "data_offset": 2048, 00:20:17.117 "data_size": 63488 00:20:17.117 }, 00:20:17.117 { 00:20:17.117 "name": "BaseBdev3", 00:20:17.117 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:17.117 "is_configured": true, 00:20:17.117 "data_offset": 2048, 00:20:17.117 "data_size": 63488 00:20:17.117 }, 00:20:17.117 { 00:20:17.117 "name": "BaseBdev4", 00:20:17.117 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:17.117 "is_configured": true, 00:20:17.117 "data_offset": 2048, 00:20:17.117 "data_size": 63488 00:20:17.117 } 00:20:17.117 ] 00:20:17.117 }' 00:20:17.117 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.117 22:37:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.375 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.375 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.376 
22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.376 "name": "raid_bdev1", 00:20:17.376 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:17.376 "strip_size_kb": 64, 00:20:17.376 "state": "online", 00:20:17.376 "raid_level": "raid5f", 00:20:17.376 "superblock": true, 00:20:17.376 "num_base_bdevs": 4, 00:20:17.376 "num_base_bdevs_discovered": 3, 00:20:17.376 "num_base_bdevs_operational": 3, 00:20:17.376 "base_bdevs_list": [ 00:20:17.376 { 00:20:17.376 "name": null, 00:20:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.376 "is_configured": false, 00:20:17.376 "data_offset": 0, 00:20:17.376 "data_size": 63488 00:20:17.376 }, 00:20:17.376 { 00:20:17.376 "name": "BaseBdev2", 00:20:17.376 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:17.376 "is_configured": true, 00:20:17.376 "data_offset": 2048, 00:20:17.376 "data_size": 63488 00:20:17.376 }, 00:20:17.376 { 00:20:17.376 "name": "BaseBdev3", 00:20:17.376 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:17.376 "is_configured": true, 00:20:17.376 "data_offset": 2048, 00:20:17.376 
"data_size": 63488 00:20:17.376 }, 00:20:17.376 { 00:20:17.376 "name": "BaseBdev4", 00:20:17.376 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:17.376 "is_configured": true, 00:20:17.376 "data_offset": 2048, 00:20:17.376 "data_size": 63488 00:20:17.376 } 00:20:17.376 ] 00:20:17.376 }' 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.376 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.634 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.634 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.634 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.634 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.634 [2024-09-27 22:37:13.301163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.634 [2024-09-27 22:37:13.319001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:20:17.634 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.634 22:37:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:17.634 [2024-09-27 22:37:13.329821] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.569 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.569 "name": "raid_bdev1", 00:20:18.569 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:18.569 "strip_size_kb": 64, 00:20:18.569 "state": "online", 00:20:18.569 "raid_level": "raid5f", 00:20:18.569 "superblock": true, 00:20:18.569 "num_base_bdevs": 4, 00:20:18.569 "num_base_bdevs_discovered": 4, 00:20:18.569 "num_base_bdevs_operational": 4, 00:20:18.569 "process": { 00:20:18.569 "type": "rebuild", 00:20:18.569 "target": "spare", 00:20:18.569 "progress": { 00:20:18.569 "blocks": 19200, 00:20:18.569 "percent": 10 00:20:18.569 } 00:20:18.569 }, 00:20:18.569 "base_bdevs_list": [ 00:20:18.569 { 00:20:18.569 "name": "spare", 00:20:18.569 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:18.569 "is_configured": true, 00:20:18.569 "data_offset": 2048, 00:20:18.569 "data_size": 63488 00:20:18.569 }, 00:20:18.569 { 00:20:18.569 "name": "BaseBdev2", 00:20:18.569 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:18.569 "is_configured": true, 00:20:18.569 "data_offset": 2048, 00:20:18.569 "data_size": 63488 00:20:18.569 }, 00:20:18.569 { 
00:20:18.569 "name": "BaseBdev3", 00:20:18.569 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:18.569 "is_configured": true, 00:20:18.569 "data_offset": 2048, 00:20:18.569 "data_size": 63488 00:20:18.569 }, 00:20:18.569 { 00:20:18.569 "name": "BaseBdev4", 00:20:18.569 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:18.569 "is_configured": true, 00:20:18.570 "data_offset": 2048, 00:20:18.570 "data_size": 63488 00:20:18.570 } 00:20:18.570 ] 00:20:18.570 }' 00:20:18.570 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.570 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.570 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:18.829 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=738 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.829 "name": "raid_bdev1", 00:20:18.829 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:18.829 "strip_size_kb": 64, 00:20:18.829 "state": "online", 00:20:18.829 "raid_level": "raid5f", 00:20:18.829 "superblock": true, 00:20:18.829 "num_base_bdevs": 4, 00:20:18.829 "num_base_bdevs_discovered": 4, 00:20:18.829 "num_base_bdevs_operational": 4, 00:20:18.829 "process": { 00:20:18.829 "type": "rebuild", 00:20:18.829 "target": "spare", 00:20:18.829 "progress": { 00:20:18.829 "blocks": 21120, 00:20:18.829 "percent": 11 00:20:18.829 } 00:20:18.829 }, 00:20:18.829 "base_bdevs_list": [ 00:20:18.829 { 00:20:18.829 "name": "spare", 00:20:18.829 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:18.829 "is_configured": true, 00:20:18.829 "data_offset": 2048, 00:20:18.829 "data_size": 63488 00:20:18.829 }, 00:20:18.829 { 00:20:18.829 "name": "BaseBdev2", 00:20:18.829 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:18.829 "is_configured": true, 00:20:18.829 "data_offset": 2048, 00:20:18.829 "data_size": 63488 00:20:18.829 }, 00:20:18.829 { 
00:20:18.829 "name": "BaseBdev3", 00:20:18.829 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:18.829 "is_configured": true, 00:20:18.829 "data_offset": 2048, 00:20:18.829 "data_size": 63488 00:20:18.829 }, 00:20:18.829 { 00:20:18.829 "name": "BaseBdev4", 00:20:18.829 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:18.829 "is_configured": true, 00:20:18.829 "data_offset": 2048, 00:20:18.829 "data_size": 63488 00:20:18.829 } 00:20:18.829 ] 00:20:18.829 }' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.829 22:37:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.766 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.766 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.766 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.767 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.767 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.767 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.767 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.767 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.767 22:37:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.767 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.025 "name": "raid_bdev1", 00:20:20.025 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:20.025 "strip_size_kb": 64, 00:20:20.025 "state": "online", 00:20:20.025 "raid_level": "raid5f", 00:20:20.025 "superblock": true, 00:20:20.025 "num_base_bdevs": 4, 00:20:20.025 "num_base_bdevs_discovered": 4, 00:20:20.025 "num_base_bdevs_operational": 4, 00:20:20.025 "process": { 00:20:20.025 "type": "rebuild", 00:20:20.025 "target": "spare", 00:20:20.025 "progress": { 00:20:20.025 "blocks": 42240, 00:20:20.025 "percent": 22 00:20:20.025 } 00:20:20.025 }, 00:20:20.025 "base_bdevs_list": [ 00:20:20.025 { 00:20:20.025 "name": "spare", 00:20:20.025 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:20.025 "is_configured": true, 00:20:20.025 "data_offset": 2048, 00:20:20.025 "data_size": 63488 00:20:20.025 }, 00:20:20.025 { 00:20:20.025 "name": "BaseBdev2", 00:20:20.025 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:20.025 "is_configured": true, 00:20:20.025 "data_offset": 2048, 00:20:20.025 "data_size": 63488 00:20:20.025 }, 00:20:20.025 { 00:20:20.025 "name": "BaseBdev3", 00:20:20.025 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:20.025 "is_configured": true, 00:20:20.025 "data_offset": 2048, 00:20:20.025 "data_size": 63488 00:20:20.025 }, 00:20:20.025 { 00:20:20.025 "name": "BaseBdev4", 00:20:20.025 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:20.025 "is_configured": true, 00:20:20.025 "data_offset": 2048, 00:20:20.025 "data_size": 63488 00:20:20.025 } 00:20:20.025 ] 00:20:20.025 }' 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.025 22:37:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.958 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.958 "name": "raid_bdev1", 00:20:20.958 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:20.958 "strip_size_kb": 64, 00:20:20.958 "state": 
"online", 00:20:20.958 "raid_level": "raid5f", 00:20:20.958 "superblock": true, 00:20:20.958 "num_base_bdevs": 4, 00:20:20.958 "num_base_bdevs_discovered": 4, 00:20:20.958 "num_base_bdevs_operational": 4, 00:20:20.958 "process": { 00:20:20.958 "type": "rebuild", 00:20:20.958 "target": "spare", 00:20:20.958 "progress": { 00:20:20.958 "blocks": 65280, 00:20:20.958 "percent": 34 00:20:20.958 } 00:20:20.958 }, 00:20:20.958 "base_bdevs_list": [ 00:20:20.958 { 00:20:20.958 "name": "spare", 00:20:20.958 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:20.958 "is_configured": true, 00:20:20.958 "data_offset": 2048, 00:20:20.958 "data_size": 63488 00:20:20.958 }, 00:20:20.958 { 00:20:20.959 "name": "BaseBdev2", 00:20:20.959 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:20.959 "is_configured": true, 00:20:20.959 "data_offset": 2048, 00:20:20.959 "data_size": 63488 00:20:20.959 }, 00:20:20.959 { 00:20:20.959 "name": "BaseBdev3", 00:20:20.959 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:20.959 "is_configured": true, 00:20:20.959 "data_offset": 2048, 00:20:20.959 "data_size": 63488 00:20:20.959 }, 00:20:20.959 { 00:20:20.959 "name": "BaseBdev4", 00:20:20.959 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:20.959 "is_configured": true, 00:20:20.959 "data_offset": 2048, 00:20:20.959 "data_size": 63488 00:20:20.959 } 00:20:20.959 ] 00:20:20.959 }' 00:20:20.959 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.959 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.218 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.218 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.218 22:37:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.155 "name": "raid_bdev1", 00:20:22.155 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:22.155 "strip_size_kb": 64, 00:20:22.155 "state": "online", 00:20:22.155 "raid_level": "raid5f", 00:20:22.155 "superblock": true, 00:20:22.155 "num_base_bdevs": 4, 00:20:22.155 "num_base_bdevs_discovered": 4, 00:20:22.155 "num_base_bdevs_operational": 4, 00:20:22.155 "process": { 00:20:22.155 "type": "rebuild", 00:20:22.155 "target": "spare", 00:20:22.155 "progress": { 00:20:22.155 "blocks": 86400, 00:20:22.155 "percent": 45 00:20:22.155 } 00:20:22.155 }, 00:20:22.155 "base_bdevs_list": [ 00:20:22.155 { 00:20:22.155 "name": "spare", 00:20:22.155 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 
00:20:22.155 "is_configured": true, 00:20:22.155 "data_offset": 2048, 00:20:22.155 "data_size": 63488 00:20:22.155 }, 00:20:22.155 { 00:20:22.155 "name": "BaseBdev2", 00:20:22.155 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:22.155 "is_configured": true, 00:20:22.155 "data_offset": 2048, 00:20:22.155 "data_size": 63488 00:20:22.155 }, 00:20:22.155 { 00:20:22.155 "name": "BaseBdev3", 00:20:22.155 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:22.155 "is_configured": true, 00:20:22.155 "data_offset": 2048, 00:20:22.155 "data_size": 63488 00:20:22.155 }, 00:20:22.155 { 00:20:22.155 "name": "BaseBdev4", 00:20:22.155 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:22.155 "is_configured": true, 00:20:22.155 "data_offset": 2048, 00:20:22.155 "data_size": 63488 00:20:22.155 } 00:20:22.155 ] 00:20:22.155 }' 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.155 22:37:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.155 22:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.155 22:37:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:23.582 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.582 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.582 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.582 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.582 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.583 22:37:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.583 "name": "raid_bdev1", 00:20:23.583 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:23.583 "strip_size_kb": 64, 00:20:23.583 "state": "online", 00:20:23.583 "raid_level": "raid5f", 00:20:23.583 "superblock": true, 00:20:23.583 "num_base_bdevs": 4, 00:20:23.583 "num_base_bdevs_discovered": 4, 00:20:23.583 "num_base_bdevs_operational": 4, 00:20:23.583 "process": { 00:20:23.583 "type": "rebuild", 00:20:23.583 "target": "spare", 00:20:23.583 "progress": { 00:20:23.583 "blocks": 107520, 00:20:23.583 "percent": 56 00:20:23.583 } 00:20:23.583 }, 00:20:23.583 "base_bdevs_list": [ 00:20:23.583 { 00:20:23.583 "name": "spare", 00:20:23.583 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:23.583 "is_configured": true, 00:20:23.583 "data_offset": 2048, 00:20:23.583 "data_size": 63488 00:20:23.583 }, 00:20:23.583 { 00:20:23.583 "name": "BaseBdev2", 00:20:23.583 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:23.583 "is_configured": true, 00:20:23.583 "data_offset": 2048, 00:20:23.583 "data_size": 63488 00:20:23.583 }, 00:20:23.583 { 00:20:23.583 "name": "BaseBdev3", 00:20:23.583 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:23.583 "is_configured": true, 00:20:23.583 "data_offset": 2048, 00:20:23.583 
"data_size": 63488 00:20:23.583 }, 00:20:23.583 { 00:20:23.583 "name": "BaseBdev4", 00:20:23.583 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:23.583 "is_configured": true, 00:20:23.583 "data_offset": 2048, 00:20:23.583 "data_size": 63488 00:20:23.583 } 00:20:23.583 ] 00:20:23.583 }' 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.583 22:37:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.521 
22:37:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.521 "name": "raid_bdev1", 00:20:24.521 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:24.521 "strip_size_kb": 64, 00:20:24.521 "state": "online", 00:20:24.521 "raid_level": "raid5f", 00:20:24.521 "superblock": true, 00:20:24.521 "num_base_bdevs": 4, 00:20:24.521 "num_base_bdevs_discovered": 4, 00:20:24.521 "num_base_bdevs_operational": 4, 00:20:24.521 "process": { 00:20:24.521 "type": "rebuild", 00:20:24.521 "target": "spare", 00:20:24.521 "progress": { 00:20:24.521 "blocks": 130560, 00:20:24.521 "percent": 68 00:20:24.521 } 00:20:24.521 }, 00:20:24.521 "base_bdevs_list": [ 00:20:24.521 { 00:20:24.521 "name": "spare", 00:20:24.521 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 63488 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "name": "BaseBdev2", 00:20:24.521 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 63488 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "name": "BaseBdev3", 00:20:24.521 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 63488 00:20:24.521 }, 00:20:24.521 { 00:20:24.521 "name": "BaseBdev4", 00:20:24.521 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:24.521 "is_configured": true, 00:20:24.521 "data_offset": 2048, 00:20:24.521 "data_size": 63488 00:20:24.521 } 00:20:24.521 ] 00:20:24.521 }' 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.521 22:37:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.521 22:37:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.458 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.716 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.716 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.716 "name": "raid_bdev1", 00:20:25.716 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:25.716 "strip_size_kb": 64, 00:20:25.716 "state": "online", 00:20:25.716 "raid_level": "raid5f", 00:20:25.716 "superblock": true, 00:20:25.716 "num_base_bdevs": 4, 00:20:25.716 "num_base_bdevs_discovered": 4, 00:20:25.716 "num_base_bdevs_operational": 
4, 00:20:25.716 "process": { 00:20:25.716 "type": "rebuild", 00:20:25.716 "target": "spare", 00:20:25.716 "progress": { 00:20:25.716 "blocks": 151680, 00:20:25.716 "percent": 79 00:20:25.716 } 00:20:25.716 }, 00:20:25.716 "base_bdevs_list": [ 00:20:25.716 { 00:20:25.716 "name": "spare", 00:20:25.716 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:25.716 "is_configured": true, 00:20:25.716 "data_offset": 2048, 00:20:25.716 "data_size": 63488 00:20:25.716 }, 00:20:25.716 { 00:20:25.716 "name": "BaseBdev2", 00:20:25.716 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:25.716 "is_configured": true, 00:20:25.716 "data_offset": 2048, 00:20:25.716 "data_size": 63488 00:20:25.716 }, 00:20:25.716 { 00:20:25.716 "name": "BaseBdev3", 00:20:25.716 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:25.716 "is_configured": true, 00:20:25.716 "data_offset": 2048, 00:20:25.717 "data_size": 63488 00:20:25.717 }, 00:20:25.717 { 00:20:25.717 "name": "BaseBdev4", 00:20:25.717 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:25.717 "is_configured": true, 00:20:25.717 "data_offset": 2048, 00:20:25.717 "data_size": 63488 00:20:25.717 } 00:20:25.717 ] 00:20:25.717 }' 00:20:25.717 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.717 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.717 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.717 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.717 22:37:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:26.802 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:26.802 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.803 
22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.803 "name": "raid_bdev1", 00:20:26.803 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:26.803 "strip_size_kb": 64, 00:20:26.803 "state": "online", 00:20:26.803 "raid_level": "raid5f", 00:20:26.803 "superblock": true, 00:20:26.803 "num_base_bdevs": 4, 00:20:26.803 "num_base_bdevs_discovered": 4, 00:20:26.803 "num_base_bdevs_operational": 4, 00:20:26.803 "process": { 00:20:26.803 "type": "rebuild", 00:20:26.803 "target": "spare", 00:20:26.803 "progress": { 00:20:26.803 "blocks": 172800, 00:20:26.803 "percent": 90 00:20:26.803 } 00:20:26.803 }, 00:20:26.803 "base_bdevs_list": [ 00:20:26.803 { 00:20:26.803 "name": "spare", 00:20:26.803 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:26.803 "is_configured": true, 00:20:26.803 "data_offset": 2048, 00:20:26.803 "data_size": 63488 00:20:26.803 }, 00:20:26.803 { 00:20:26.803 "name": "BaseBdev2", 00:20:26.803 "uuid": 
"5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:26.803 "is_configured": true, 00:20:26.803 "data_offset": 2048, 00:20:26.803 "data_size": 63488 00:20:26.803 }, 00:20:26.803 { 00:20:26.803 "name": "BaseBdev3", 00:20:26.803 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:26.803 "is_configured": true, 00:20:26.803 "data_offset": 2048, 00:20:26.803 "data_size": 63488 00:20:26.803 }, 00:20:26.803 { 00:20:26.803 "name": "BaseBdev4", 00:20:26.803 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:26.803 "is_configured": true, 00:20:26.803 "data_offset": 2048, 00:20:26.803 "data_size": 63488 00:20:26.803 } 00:20:26.803 ] 00:20:26.803 }' 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.803 22:37:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:27.739 [2024-09-27 22:37:23.389621] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:27.739 [2024-09-27 22:37:23.389717] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:27.739 [2024-09-27 22:37:23.389892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.739 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.998 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.998 "name": "raid_bdev1", 00:20:27.998 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:27.999 "strip_size_kb": 64, 00:20:27.999 "state": "online", 00:20:27.999 "raid_level": "raid5f", 00:20:27.999 "superblock": true, 00:20:27.999 "num_base_bdevs": 4, 00:20:27.999 "num_base_bdevs_discovered": 4, 00:20:27.999 "num_base_bdevs_operational": 4, 00:20:27.999 "base_bdevs_list": [ 00:20:27.999 { 00:20:27.999 "name": "spare", 00:20:27.999 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 }, 00:20:27.999 { 00:20:27.999 "name": "BaseBdev2", 00:20:27.999 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 }, 00:20:27.999 { 00:20:27.999 "name": "BaseBdev3", 00:20:27.999 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 }, 
00:20:27.999 { 00:20:27.999 "name": "BaseBdev4", 00:20:27.999 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 } 00:20:27.999 ] 00:20:27.999 }' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.999 "name": "raid_bdev1", 00:20:27.999 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:27.999 "strip_size_kb": 64, 00:20:27.999 "state": "online", 00:20:27.999 "raid_level": "raid5f", 00:20:27.999 "superblock": true, 00:20:27.999 "num_base_bdevs": 4, 00:20:27.999 "num_base_bdevs_discovered": 4, 00:20:27.999 "num_base_bdevs_operational": 4, 00:20:27.999 "base_bdevs_list": [ 00:20:27.999 { 00:20:27.999 "name": "spare", 00:20:27.999 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 }, 00:20:27.999 { 00:20:27.999 "name": "BaseBdev2", 00:20:27.999 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 }, 00:20:27.999 { 00:20:27.999 "name": "BaseBdev3", 00:20:27.999 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 }, 00:20:27.999 { 00:20:27.999 "name": "BaseBdev4", 00:20:27.999 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:27.999 "is_configured": true, 00:20:27.999 "data_offset": 2048, 00:20:27.999 "data_size": 63488 00:20:27.999 } 00:20:27.999 ] 00:20:27.999 }' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:27.999 22:37:23 
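The trace above repeatedly fetches `raid_bdev1` via `bdev_raid_get_bdevs` and checks the rebuild state with the jq filters `.process.type // "none"` and `.process.target // "none"`. A minimal Python sketch of that same check, using sample JSON modeled on the log output above (this is an illustration only, not the SPDK RPC client):

```python
import json

# Sample bdev_raid_get_bdevs entry, abridged from the trace above.
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "process": {"type": "rebuild", "target": "spare",
              "progress": {"blocks": 86400, "percent": 45}}
}""")

# Mirror jq's '.process.type // "none"': fall back to "none" when the
# rebuild process has finished and the "process" object is absent.
process = raid_bdev_info.get("process") or {}
ptype = process.get("type", "none")
target = process.get("target", "none")

# The test loops while (ptype, target) == ("rebuild", "spare"); once the
# rebuild completes, both collapse to "none" and the loop breaks.
print(ptype, target)
```

Once `process` disappears from the RPC output, the same filters yield `none none`, which is exactly the condition the `@709 break` in the trace fires on.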
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.999 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.259 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.259 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.259 "name": "raid_bdev1", 00:20:28.259 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:28.259 "strip_size_kb": 64, 00:20:28.259 "state": "online", 00:20:28.259 "raid_level": "raid5f", 00:20:28.259 "superblock": true, 00:20:28.259 "num_base_bdevs": 4, 00:20:28.259 "num_base_bdevs_discovered": 4, 00:20:28.259 "num_base_bdevs_operational": 4, 00:20:28.259 
"base_bdevs_list": [ 00:20:28.259 { 00:20:28.259 "name": "spare", 00:20:28.259 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db", 00:20:28.259 "is_configured": true, 00:20:28.259 "data_offset": 2048, 00:20:28.259 "data_size": 63488 00:20:28.259 }, 00:20:28.259 { 00:20:28.259 "name": "BaseBdev2", 00:20:28.259 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:28.259 "is_configured": true, 00:20:28.259 "data_offset": 2048, 00:20:28.259 "data_size": 63488 00:20:28.259 }, 00:20:28.259 { 00:20:28.259 "name": "BaseBdev3", 00:20:28.259 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:28.259 "is_configured": true, 00:20:28.259 "data_offset": 2048, 00:20:28.259 "data_size": 63488 00:20:28.259 }, 00:20:28.259 { 00:20:28.259 "name": "BaseBdev4", 00:20:28.259 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:28.259 "is_configured": true, 00:20:28.259 "data_offset": 2048, 00:20:28.259 "data_size": 63488 00:20:28.259 } 00:20:28.259 ] 00:20:28.259 }' 00:20:28.260 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.260 22:37:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 [2024-09-27 22:37:24.312125] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.519 [2024-09-27 22:37:24.312280] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.519 [2024-09-27 22:37:24.312421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.519 [2024-09-27 22:37:24.312527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:20:28.519 [2024-09-27 22:37:24.312540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:28.519 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:28.779 /dev/nbd0 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.779 1+0 records in 00:20:28.779 1+0 records out 00:20:28.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338073 s, 12.1 MB/s 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:28.779 22:37:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:28.779 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:29.038 /dev/nbd1 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:20:29.038 1+0 records in
00:20:29.038 1+0 records out
00:20:29.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422351 s, 9.7 MB/s
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:20:29.038 22:37:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:29.298 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:29.558 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:29.820 [2024-09-27 22:37:25.535446] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:29.820 [2024-09-27 22:37:25.535508] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:29.820 [2024-09-27 22:37:25.535533] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:20:29.820 [2024-09-27 22:37:25.535545] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:29.820 [2024-09-27 22:37:25.538277] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:29.820 [2024-09-27 22:37:25.538319] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:29.820 [2024-09-27 22:37:25.538434] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:20:29.820 [2024-09-27 22:37:25.538511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:29.820 [2024-09-27 22:37:25.538685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:29.820 [2024-09-27 22:37:25.538779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:29.820 [2024-09-27 22:37:25.538847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:29.820 spare
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:29.820 [2024-09-27 22:37:25.638814] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:20:29.820 [2024-09-27 22:37:25.638864] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:20:29.820 [2024-09-27 22:37:25.639244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0
00:20:29.820 [2024-09-27 22:37:25.648337] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:20:29.820 [2024-09-27 22:37:25.648366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:20:29.820 [2024-09-27 22:37:25.648595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:29.820 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.082 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:30.082 "name": "raid_bdev1",
00:20:30.082 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:30.082 "strip_size_kb": 64,
00:20:30.082 "state": "online",
00:20:30.082 "raid_level": "raid5f",
00:20:30.082 "superblock": true,
00:20:30.082 "num_base_bdevs": 4,
00:20:30.082 "num_base_bdevs_discovered": 4,
00:20:30.082 "num_base_bdevs_operational": 4,
00:20:30.082 "base_bdevs_list": [
00:20:30.082 {
00:20:30.082 "name": "spare",
00:20:30.082 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db",
00:20:30.082 "is_configured": true,
00:20:30.082 "data_offset": 2048,
00:20:30.082 "data_size": 63488
00:20:30.082 },
00:20:30.082 {
00:20:30.082 "name": "BaseBdev2",
00:20:30.082 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:30.082 "is_configured": true,
00:20:30.082 "data_offset": 2048,
00:20:30.082 "data_size": 63488
00:20:30.082 },
00:20:30.082 {
00:20:30.082 "name": "BaseBdev3",
00:20:30.082 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:30.082 "is_configured": true,
00:20:30.082 "data_offset": 2048,
00:20:30.082 "data_size": 63488
00:20:30.082 },
00:20:30.082 {
00:20:30.082 "name": "BaseBdev4",
00:20:30.082 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:30.082 "is_configured": true,
00:20:30.082 "data_offset": 2048,
00:20:30.082 "data_size": 63488
00:20:30.082 }
00:20:30.082 ]
00:20:30.082 }'
00:20:30.082 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:30.082 22:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:30.341 "name": "raid_bdev1",
00:20:30.341 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:30.341 "strip_size_kb": 64,
00:20:30.341 "state": "online",
00:20:30.341 "raid_level": "raid5f",
00:20:30.341 "superblock": true,
00:20:30.341 "num_base_bdevs": 4,
00:20:30.341 "num_base_bdevs_discovered": 4,
00:20:30.341 "num_base_bdevs_operational": 4,
00:20:30.341 "base_bdevs_list": [
00:20:30.341 {
00:20:30.341 "name": "spare",
00:20:30.341 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db",
00:20:30.341 "is_configured": true,
00:20:30.341 "data_offset": 2048,
00:20:30.341 "data_size": 63488
00:20:30.341 },
00:20:30.341 {
00:20:30.341 "name": "BaseBdev2",
00:20:30.341 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:30.341 "is_configured": true,
00:20:30.341 "data_offset": 2048,
00:20:30.341 "data_size": 63488
00:20:30.341 },
00:20:30.341 {
00:20:30.341 "name": "BaseBdev3",
00:20:30.341 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:30.341 "is_configured": true,
00:20:30.341 "data_offset": 2048,
00:20:30.341 "data_size": 63488
00:20:30.341 },
00:20:30.341 {
00:20:30.341 "name": "BaseBdev4",
00:20:30.341 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:30.341 "is_configured": true,
00:20:30.341 "data_offset": 2048,
00:20:30.341 "data_size": 63488
00:20:30.341 }
00:20:30.341 ]
00:20:30.341 }'
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.341 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.601 [2024-09-27 22:37:26.232142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.601 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:30.601 "name": "raid_bdev1",
00:20:30.601 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:30.601 "strip_size_kb": 64,
00:20:30.601 "state": "online",
00:20:30.601 "raid_level": "raid5f",
00:20:30.601 "superblock": true,
00:20:30.601 "num_base_bdevs": 4,
00:20:30.601 "num_base_bdevs_discovered": 3,
00:20:30.601 "num_base_bdevs_operational": 3,
00:20:30.601 "base_bdevs_list": [
00:20:30.601 {
00:20:30.601 "name": null,
00:20:30.601 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:30.601 "is_configured": false,
00:20:30.601 "data_offset": 0,
00:20:30.601 "data_size": 63488
00:20:30.601 },
00:20:30.601 {
00:20:30.602 "name": "BaseBdev2",
00:20:30.602 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:30.602 "is_configured": true,
00:20:30.602 "data_offset": 2048,
00:20:30.602 "data_size": 63488
00:20:30.602 },
00:20:30.602 {
00:20:30.602 "name": "BaseBdev3",
00:20:30.602 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:30.602 "is_configured": true,
00:20:30.602 "data_offset": 2048,
00:20:30.602 "data_size": 63488
00:20:30.602 },
00:20:30.602 {
00:20:30.602 "name": "BaseBdev4",
00:20:30.602 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:30.602 "is_configured": true,
00:20:30.602 "data_offset": 2048,
00:20:30.602 "data_size": 63488
00:20:30.602 }
00:20:30.602 ]
00:20:30.602 }'
00:20:30.602 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:30.602 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.861 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:30.861 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.861 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:30.861 [2024-09-27 22:37:26.668171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:30.861 [2024-09-27 22:37:26.668359] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:20:30.861 [2024-09-27 22:37:26.668382] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:20:30.861 [2024-09-27 22:37:26.668422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:30.861 [2024-09-27 22:37:26.685627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0
00:20:30.861 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.861 22:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
[2024-09-27 22:37:26.696500] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:32.244 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:32.244 "name": "raid_bdev1",
00:20:32.244 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:32.244 "strip_size_kb": 64,
00:20:32.244 "state": "online",
00:20:32.244 "raid_level": "raid5f",
00:20:32.244 "superblock": true,
00:20:32.244 "num_base_bdevs": 4,
00:20:32.244 "num_base_bdevs_discovered": 4,
00:20:32.244 "num_base_bdevs_operational": 4,
00:20:32.244 "process": {
00:20:32.244 "type": "rebuild",
00:20:32.244 "target": "spare",
00:20:32.244 "progress": {
00:20:32.244 "blocks": 19200,
00:20:32.244 "percent": 10
00:20:32.244 }
00:20:32.244 },
00:20:32.244 "base_bdevs_list": [
00:20:32.244 {
00:20:32.244 "name": "spare",
00:20:32.244 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db",
00:20:32.244 "is_configured": true,
00:20:32.244 "data_offset": 2048,
00:20:32.244 "data_size": 63488
00:20:32.244 },
00:20:32.245 {
00:20:32.245 "name": "BaseBdev2",
00:20:32.245 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:32.245 "is_configured": true,
00:20:32.245 "data_offset": 2048,
00:20:32.245 "data_size": 63488
00:20:32.245 },
00:20:32.245 {
00:20:32.245 "name": "BaseBdev3",
00:20:32.245 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:32.245 "is_configured": true,
00:20:32.245 "data_offset": 2048,
00:20:32.245 "data_size": 63488
00:20:32.245 },
00:20:32.245 {
00:20:32.245 "name": "BaseBdev4",
00:20:32.245 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:32.245 "is_configured": true,
00:20:32.245 "data_offset": 2048,
00:20:32.245 "data_size": 63488
00:20:32.245 }
00:20:32.245 ]
00:20:32.245 }'
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:32.245 [2024-09-27 22:37:27.820293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:32.245 [2024-09-27 22:37:27.904274] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:32.245 [2024-09-27 22:37:27.904380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:32.245 [2024-09-27 22:37:27.904401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:32.245 [2024-09-27 22:37:27.904417] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:32.245 "name": "raid_bdev1",
00:20:32.245 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:32.245 "strip_size_kb": 64,
00:20:32.245 "state": "online",
00:20:32.245 "raid_level": "raid5f",
00:20:32.245 "superblock": true,
00:20:32.245 "num_base_bdevs": 4,
00:20:32.245 "num_base_bdevs_discovered": 3,
00:20:32.245 "num_base_bdevs_operational": 3,
00:20:32.245 "base_bdevs_list": [
00:20:32.245 {
00:20:32.245 "name": null,
00:20:32.245 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:32.245 "is_configured": false,
00:20:32.245 "data_offset": 0,
00:20:32.245 "data_size": 63488
00:20:32.245 },
00:20:32.245 {
00:20:32.245 "name": "BaseBdev2",
00:20:32.245 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:32.245 "is_configured": true,
00:20:32.245 "data_offset": 2048,
00:20:32.245 "data_size": 63488
00:20:32.245 },
00:20:32.245 {
00:20:32.245 "name": "BaseBdev3",
00:20:32.245 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:32.245 "is_configured": true,
00:20:32.245 "data_offset": 2048,
00:20:32.245 "data_size": 63488
00:20:32.245 },
00:20:32.245 {
00:20:32.245 "name": "BaseBdev4",
00:20:32.245 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:32.245 "is_configured": true,
00:20:32.245 "data_offset": 2048,
00:20:32.245 "data_size": 63488
00:20:32.245 }
00:20:32.245 ]
00:20:32.245 }'
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:32.245 22:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:32.504 22:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:32.504 22:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:32.504 22:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:32.504 [2024-09-27 22:37:28.364155] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:32.504 [2024-09-27 22:37:28.364243] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:32.504 [2024-09-27 22:37:28.364270] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:20:32.504 [2024-09-27 22:37:28.364285] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:32.504 [2024-09-27 22:37:28.364795] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:32.504 [2024-09-27 22:37:28.364828] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:32.505 [2024-09-27 22:37:28.364925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:20:32.505 [2024-09-27 22:37:28.364943] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:20:32.505 [2024-09-27 22:37:28.364956] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:20:32.505 [2024-09-27 22:37:28.365003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:32.505 [2024-09-27 22:37:28.382331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370
spare
00:20:32.763 22:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:32.763 22:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
[2024-09-27 22:37:28.393942] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:33.700 "name": "raid_bdev1",
00:20:33.700 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:33.700 "strip_size_kb": 64,
00:20:33.700 "state": "online",
00:20:33.700 "raid_level": "raid5f",
00:20:33.700 "superblock": true,
00:20:33.700 "num_base_bdevs": 4,
00:20:33.700 "num_base_bdevs_discovered": 4,
00:20:33.700 "num_base_bdevs_operational": 4,
00:20:33.700 "process": {
00:20:33.700 "type": "rebuild",
00:20:33.700 "target": "spare",
00:20:33.700 "progress": {
00:20:33.700 "blocks": 19200,
00:20:33.700 "percent": 10
00:20:33.700 }
00:20:33.700 },
00:20:33.700 "base_bdevs_list": [
00:20:33.700 {
00:20:33.700 "name": "spare",
00:20:33.700 "uuid": "97710653-30c3-5ee6-af0a-532dcb6888db",
00:20:33.700 "is_configured": true,
00:20:33.700 "data_offset": 2048,
00:20:33.700 "data_size": 63488
00:20:33.700 },
00:20:33.700 {
00:20:33.700 "name": "BaseBdev2",
00:20:33.700 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:33.700 "is_configured": true,
00:20:33.700 "data_offset": 2048,
00:20:33.700 "data_size": 63488
00:20:33.700 },
00:20:33.700 {
00:20:33.700 "name": "BaseBdev3",
00:20:33.700 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:33.700 "is_configured": true,
00:20:33.700 "data_offset": 2048,
00:20:33.700 "data_size": 63488
00:20:33.700 },
00:20:33.700 {
00:20:33.700 "name": "BaseBdev4",
00:20:33.700 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:33.700 "is_configured": true,
00:20:33.700 "data_offset": 2048,
00:20:33.700 "data_size": 63488
00:20:33.700 }
00:20:33.700 ]
00:20:33.700 }'
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:20:33.700 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:33.701 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:33.701 [2024-09-27 22:37:29.521778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:33.959 [2024-09-27 22:37:29.601731] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:33.959 [2024-09-27 22:37:29.601817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:33.959 [2024-09-27 22:37:29.601841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:33.959 [2024-09-27 22:37:29.601850] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:33.959 "name": "raid_bdev1",
00:20:33.959 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:33.959 "strip_size_kb": 64,
00:20:33.959 "state": "online",
00:20:33.959 "raid_level": "raid5f",
00:20:33.959 "superblock": true,
00:20:33.959 "num_base_bdevs": 4,
00:20:33.959 "num_base_bdevs_discovered": 3,
00:20:33.959 "num_base_bdevs_operational": 3,
00:20:33.959 "base_bdevs_list": [
00:20:33.959 {
00:20:33.959 "name": null,
00:20:33.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:33.959 "is_configured": false,
00:20:33.959 "data_offset": 0,
00:20:33.959 "data_size": 63488
00:20:33.959 },
00:20:33.959 {
00:20:33.959 "name": "BaseBdev2",
00:20:33.959 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750",
00:20:33.959 "is_configured": true,
00:20:33.959 "data_offset": 2048,
00:20:33.959 "data_size": 63488
00:20:33.959 },
00:20:33.959 {
00:20:33.959 "name": "BaseBdev3",
00:20:33.959 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc",
00:20:33.959 "is_configured": true,
00:20:33.959 "data_offset": 2048,
00:20:33.959 "data_size": 63488
00:20:33.959 },
00:20:33.959 {
00:20:33.959 "name": "BaseBdev4",
00:20:33.959 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f",
00:20:33.959 "is_configured": true,
00:20:33.959 "data_offset": 2048,
00:20:33.959 "data_size": 63488
00:20:33.959 }
00:20:33.959 ]
00:20:33.959 }'
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:33.959 22:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:34.218 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:34.219 "name": "raid_bdev1",
00:20:34.219 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0",
00:20:34.219 "strip_size_kb": 64,
00:20:34.219 "state": "online",
00:20:34.219 "raid_level": "raid5f",
00:20:34.219 "superblock": true,
00:20:34.219 "num_base_bdevs": 4,
00:20:34.219 "num_base_bdevs_discovered": 3,
00:20:34.219 "num_base_bdevs_operational": 3,
00:20:34.219 "base_bdevs_list": [
00:20:34.219 {
00:20:34.219 "name": null,
00:20:34.219 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:34.219
"is_configured": false, 00:20:34.219 "data_offset": 0, 00:20:34.219 "data_size": 63488 00:20:34.219 }, 00:20:34.219 { 00:20:34.219 "name": "BaseBdev2", 00:20:34.219 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:34.219 "is_configured": true, 00:20:34.219 "data_offset": 2048, 00:20:34.219 "data_size": 63488 00:20:34.219 }, 00:20:34.219 { 00:20:34.219 "name": "BaseBdev3", 00:20:34.219 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:34.219 "is_configured": true, 00:20:34.219 "data_offset": 2048, 00:20:34.219 "data_size": 63488 00:20:34.219 }, 00:20:34.219 { 00:20:34.219 "name": "BaseBdev4", 00:20:34.219 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:34.219 "is_configured": true, 00:20:34.219 "data_offset": 2048, 00:20:34.219 "data_size": 63488 00:20:34.219 } 00:20:34.219 ] 00:20:34.219 }' 00:20:34.219 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:34.478 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.478 22:37:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.478 [2024-09-27 22:37:30.174845] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:34.478 [2024-09-27 22:37:30.174912] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.478 [2024-09-27 22:37:30.174936] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:34.478 [2024-09-27 22:37:30.174949] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.478 [2024-09-27 22:37:30.175441] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.478 [2024-09-27 22:37:30.175470] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:34.478 [2024-09-27 22:37:30.175558] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:34.479 [2024-09-27 22:37:30.175573] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:34.479 [2024-09-27 22:37:30.175586] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:34.479 [2024-09-27 22:37:30.175598] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:34.479 BaseBdev1 00:20:34.479 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.479 22:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.417 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.418 "name": "raid_bdev1", 00:20:35.418 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:35.418 "strip_size_kb": 64, 00:20:35.418 "state": "online", 00:20:35.418 "raid_level": "raid5f", 00:20:35.418 "superblock": true, 00:20:35.418 "num_base_bdevs": 4, 00:20:35.418 "num_base_bdevs_discovered": 3, 00:20:35.418 "num_base_bdevs_operational": 3, 00:20:35.418 "base_bdevs_list": [ 00:20:35.418 { 00:20:35.418 "name": null, 00:20:35.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.418 "is_configured": false, 00:20:35.418 
"data_offset": 0, 00:20:35.418 "data_size": 63488 00:20:35.418 }, 00:20:35.418 { 00:20:35.418 "name": "BaseBdev2", 00:20:35.418 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:35.418 "is_configured": true, 00:20:35.418 "data_offset": 2048, 00:20:35.418 "data_size": 63488 00:20:35.418 }, 00:20:35.418 { 00:20:35.418 "name": "BaseBdev3", 00:20:35.418 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:35.418 "is_configured": true, 00:20:35.418 "data_offset": 2048, 00:20:35.418 "data_size": 63488 00:20:35.418 }, 00:20:35.418 { 00:20:35.418 "name": "BaseBdev4", 00:20:35.418 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:35.418 "is_configured": true, 00:20:35.418 "data_offset": 2048, 00:20:35.418 "data_size": 63488 00:20:35.418 } 00:20:35.418 ] 00:20:35.418 }' 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.418 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.986 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.986 "name": "raid_bdev1", 00:20:35.986 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:35.986 "strip_size_kb": 64, 00:20:35.986 "state": "online", 00:20:35.986 "raid_level": "raid5f", 00:20:35.986 "superblock": true, 00:20:35.986 "num_base_bdevs": 4, 00:20:35.986 "num_base_bdevs_discovered": 3, 00:20:35.986 "num_base_bdevs_operational": 3, 00:20:35.986 "base_bdevs_list": [ 00:20:35.986 { 00:20:35.986 "name": null, 00:20:35.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.986 "is_configured": false, 00:20:35.986 "data_offset": 0, 00:20:35.986 "data_size": 63488 00:20:35.987 }, 00:20:35.987 { 00:20:35.987 "name": "BaseBdev2", 00:20:35.987 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:35.987 "is_configured": true, 00:20:35.987 "data_offset": 2048, 00:20:35.987 "data_size": 63488 00:20:35.987 }, 00:20:35.987 { 00:20:35.987 "name": "BaseBdev3", 00:20:35.987 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:35.987 "is_configured": true, 00:20:35.987 "data_offset": 2048, 00:20:35.987 "data_size": 63488 00:20:35.987 }, 00:20:35.987 { 00:20:35.987 "name": "BaseBdev4", 00:20:35.987 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:35.987 "is_configured": true, 00:20:35.987 "data_offset": 2048, 00:20:35.987 "data_size": 63488 00:20:35.987 } 00:20:35.987 ] 00:20:35.987 }' 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.987 
22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.987 [2024-09-27 22:37:31.712828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.987 [2024-09-27 22:37:31.713022] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:35.987 [2024-09-27 22:37:31.713043] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:35.987 request: 00:20:35.987 { 00:20:35.987 "base_bdev": "BaseBdev1", 00:20:35.987 "raid_bdev": "raid_bdev1", 00:20:35.987 "method": "bdev_raid_add_base_bdev", 00:20:35.987 "req_id": 1 00:20:35.987 } 00:20:35.987 Got JSON-RPC error response 00:20:35.987 response: 00:20:35.987 { 00:20:35.987 "code": -22, 00:20:35.987 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:35.987 } 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.987 22:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:36.923 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:36.923 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.923 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.923 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.923 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.923 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.924 "name": "raid_bdev1", 00:20:36.924 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:36.924 "strip_size_kb": 64, 00:20:36.924 "state": "online", 00:20:36.924 "raid_level": "raid5f", 00:20:36.924 "superblock": true, 00:20:36.924 "num_base_bdevs": 4, 00:20:36.924 "num_base_bdevs_discovered": 3, 00:20:36.924 "num_base_bdevs_operational": 3, 00:20:36.924 "base_bdevs_list": [ 00:20:36.924 { 00:20:36.924 "name": null, 00:20:36.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.924 "is_configured": false, 00:20:36.924 "data_offset": 0, 00:20:36.924 "data_size": 63488 00:20:36.924 }, 00:20:36.924 { 00:20:36.924 "name": "BaseBdev2", 00:20:36.924 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:36.924 "is_configured": true, 00:20:36.924 "data_offset": 2048, 00:20:36.924 "data_size": 63488 00:20:36.924 }, 00:20:36.924 { 00:20:36.924 "name": "BaseBdev3", 00:20:36.924 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:36.924 "is_configured": true, 00:20:36.924 "data_offset": 2048, 00:20:36.924 "data_size": 63488 00:20:36.924 }, 00:20:36.924 { 00:20:36.924 "name": "BaseBdev4", 00:20:36.924 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:36.924 "is_configured": true, 00:20:36.924 "data_offset": 2048, 00:20:36.924 "data_size": 63488 00:20:36.924 } 00:20:36.924 ] 00:20:36.924 }' 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.924 22:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.492 "name": "raid_bdev1", 00:20:37.492 "uuid": "546dd16f-a12b-4481-a167-b1ce56cd2ae0", 00:20:37.492 "strip_size_kb": 64, 00:20:37.492 "state": "online", 00:20:37.492 "raid_level": "raid5f", 00:20:37.492 "superblock": true, 00:20:37.492 "num_base_bdevs": 4, 00:20:37.492 "num_base_bdevs_discovered": 3, 00:20:37.492 "num_base_bdevs_operational": 3, 00:20:37.492 "base_bdevs_list": [ 00:20:37.492 { 00:20:37.492 "name": null, 00:20:37.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.492 "is_configured": false, 00:20:37.492 "data_offset": 0, 00:20:37.492 "data_size": 63488 00:20:37.492 }, 00:20:37.492 { 00:20:37.492 "name": "BaseBdev2", 00:20:37.492 "uuid": "5f2b585d-c743-5e6c-8c25-a279cc001750", 00:20:37.492 "is_configured": true, 
00:20:37.492 "data_offset": 2048, 00:20:37.492 "data_size": 63488 00:20:37.492 }, 00:20:37.492 { 00:20:37.492 "name": "BaseBdev3", 00:20:37.492 "uuid": "f5a174da-c58d-5df0-8051-d04c62db3cbc", 00:20:37.492 "is_configured": true, 00:20:37.492 "data_offset": 2048, 00:20:37.492 "data_size": 63488 00:20:37.492 }, 00:20:37.492 { 00:20:37.492 "name": "BaseBdev4", 00:20:37.492 "uuid": "ad776b3b-226f-5da1-b937-8b00da72a30f", 00:20:37.492 "is_configured": true, 00:20:37.492 "data_offset": 2048, 00:20:37.492 "data_size": 63488 00:20:37.492 } 00:20:37.492 ] 00:20:37.492 }' 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86259 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86259 ']' 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86259 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86259 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:37.492 killing process with pid 86259 00:20:37.492 22:37:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86259' 00:20:37.492 Received shutdown signal, test time was about 60.000000 seconds 00:20:37.492 00:20:37.492 Latency(us) 00:20:37.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.492 =================================================================================================================== 00:20:37.492 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.492 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86259 00:20:37.493 [2024-09-27 22:37:33.341162] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.493 22:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86259 00:20:37.493 [2024-09-27 22:37:33.341298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.493 [2024-09-27 22:37:33.341395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.493 [2024-09-27 22:37:33.341413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:38.063 [2024-09-27 22:37:33.851109] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.971 22:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:39.971 00:20:39.971 real 0m27.881s 00:20:39.971 user 0m34.315s 00:20:39.971 sys 0m3.405s 00:20:39.972 22:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:39.972 22:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.972 ************************************ 00:20:39.972 END TEST raid5f_rebuild_test_sb 00:20:39.972 ************************************ 00:20:40.228 22:37:35 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:40.228 22:37:35 
bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:40.228 22:37:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:40.228 22:37:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.228 22:37:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.228 ************************************ 00:20:40.228 START TEST raid_state_function_test_sb_4k 00:20:40.228 ************************************ 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:40.228 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=87079 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:40.229 Process raid pid: 87079 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87079' 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 87079 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 87079 ']' 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.229 
22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.229 22:37:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:40.229 [2024-09-27 22:37:35.999774] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:20:40.229 [2024-09-27 22:37:36.000467] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.486 [2024-09-27 22:37:36.171061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.744 [2024-09-27 22:37:36.403619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.002 [2024-09-27 22:37:36.651120] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.002 [2024-09-27 22:37:36.651155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.261 [2024-09-27 22:37:37.117608] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.261 [2024-09-27 22:37:37.117662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.261 [2024-09-27 22:37:37.117673] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.261 [2024-09-27 22:37:37.117686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.261 22:37:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.261 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.520 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.520 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.520 "name": "Existed_Raid", 00:20:41.520 "uuid": "9adc9822-69cc-4c3c-bcc0-f3ff5e2a2705", 00:20:41.520 "strip_size_kb": 0, 00:20:41.520 "state": "configuring", 00:20:41.520 "raid_level": "raid1", 00:20:41.520 "superblock": true, 00:20:41.520 "num_base_bdevs": 2, 00:20:41.520 "num_base_bdevs_discovered": 0, 00:20:41.520 "num_base_bdevs_operational": 2, 00:20:41.520 "base_bdevs_list": [ 00:20:41.520 { 00:20:41.520 "name": "BaseBdev1", 00:20:41.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.520 "is_configured": false, 00:20:41.520 "data_offset": 0, 00:20:41.520 "data_size": 0 00:20:41.520 }, 00:20:41.520 { 00:20:41.520 "name": "BaseBdev2", 00:20:41.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.520 "is_configured": false, 00:20:41.520 "data_offset": 0, 00:20:41.520 "data_size": 0 00:20:41.520 } 00:20:41.520 ] 00:20:41.520 }' 00:20:41.520 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.520 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.779 [2024-09-27 22:37:37.532989] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:41.779 [2024-09-27 22:37:37.533031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.779 [2024-09-27 22:37:37.545157] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.779 [2024-09-27 22:37:37.545200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.779 [2024-09-27 22:37:37.545210] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.779 [2024-09-27 22:37:37.545225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.779 [2024-09-27 22:37:37.599055] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.779 BaseBdev1 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.779 [ 00:20:41.779 { 00:20:41.779 "name": "BaseBdev1", 00:20:41.779 "aliases": [ 00:20:41.779 "48116626-7d27-4286-b5a5-e31881119c05" 00:20:41.779 ], 00:20:41.779 "product_name": "Malloc disk", 00:20:41.779 "block_size": 4096, 
00:20:41.779 "num_blocks": 8192, 00:20:41.779 "uuid": "48116626-7d27-4286-b5a5-e31881119c05", 00:20:41.779 "assigned_rate_limits": { 00:20:41.779 "rw_ios_per_sec": 0, 00:20:41.779 "rw_mbytes_per_sec": 0, 00:20:41.779 "r_mbytes_per_sec": 0, 00:20:41.779 "w_mbytes_per_sec": 0 00:20:41.779 }, 00:20:41.779 "claimed": true, 00:20:41.779 "claim_type": "exclusive_write", 00:20:41.779 "zoned": false, 00:20:41.779 "supported_io_types": { 00:20:41.779 "read": true, 00:20:41.779 "write": true, 00:20:41.779 "unmap": true, 00:20:41.779 "flush": true, 00:20:41.779 "reset": true, 00:20:41.779 "nvme_admin": false, 00:20:41.779 "nvme_io": false, 00:20:41.779 "nvme_io_md": false, 00:20:41.779 "write_zeroes": true, 00:20:41.779 "zcopy": true, 00:20:41.779 "get_zone_info": false, 00:20:41.779 "zone_management": false, 00:20:41.779 "zone_append": false, 00:20:41.779 "compare": false, 00:20:41.779 "compare_and_write": false, 00:20:41.779 "abort": true, 00:20:41.779 "seek_hole": false, 00:20:41.779 "seek_data": false, 00:20:41.779 "copy": true, 00:20:41.779 "nvme_iov_md": false 00:20:41.779 }, 00:20:41.779 "memory_domains": [ 00:20:41.779 { 00:20:41.779 "dma_device_id": "system", 00:20:41.779 "dma_device_type": 1 00:20:41.779 }, 00:20:41.779 { 00:20:41.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.779 "dma_device_type": 2 00:20:41.779 } 00:20:41.779 ], 00:20:41.779 "driver_specific": {} 00:20:41.779 } 00:20:41.779 ] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.779 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.038 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.038 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.038 "name": "Existed_Raid", 00:20:42.038 "uuid": "a773ca69-de20-4d3e-9489-5f383d66782b", 00:20:42.038 "strip_size_kb": 0, 00:20:42.038 "state": "configuring", 00:20:42.038 "raid_level": "raid1", 00:20:42.038 "superblock": true, 00:20:42.038 "num_base_bdevs": 2, 00:20:42.038 "num_base_bdevs_discovered": 1, 00:20:42.038 "num_base_bdevs_operational": 2, 00:20:42.038 "base_bdevs_list": [ 00:20:42.038 { 
00:20:42.038 "name": "BaseBdev1", 00:20:42.038 "uuid": "48116626-7d27-4286-b5a5-e31881119c05", 00:20:42.038 "is_configured": true, 00:20:42.038 "data_offset": 256, 00:20:42.038 "data_size": 7936 00:20:42.038 }, 00:20:42.038 { 00:20:42.038 "name": "BaseBdev2", 00:20:42.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.038 "is_configured": false, 00:20:42.038 "data_offset": 0, 00:20:42.038 "data_size": 0 00:20:42.038 } 00:20:42.038 ] 00:20:42.038 }' 00:20:42.038 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.038 22:37:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.297 [2024-09-27 22:37:38.074414] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:42.297 [2024-09-27 22:37:38.074467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.297 [2024-09-27 22:37:38.082434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.297 [2024-09-27 22:37:38.084620] 
bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:42.297 [2024-09-27 22:37:38.084668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.297 "name": "Existed_Raid", 00:20:42.297 "uuid": "2ba76061-159a-4e03-8df9-16854f69c5eb", 00:20:42.297 "strip_size_kb": 0, 00:20:42.297 "state": "configuring", 00:20:42.297 "raid_level": "raid1", 00:20:42.297 "superblock": true, 00:20:42.297 "num_base_bdevs": 2, 00:20:42.297 "num_base_bdevs_discovered": 1, 00:20:42.297 "num_base_bdevs_operational": 2, 00:20:42.297 "base_bdevs_list": [ 00:20:42.297 { 00:20:42.297 "name": "BaseBdev1", 00:20:42.297 "uuid": "48116626-7d27-4286-b5a5-e31881119c05", 00:20:42.297 "is_configured": true, 00:20:42.297 "data_offset": 256, 00:20:42.297 "data_size": 7936 00:20:42.297 }, 00:20:42.297 { 00:20:42.297 "name": "BaseBdev2", 00:20:42.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.297 "is_configured": false, 00:20:42.297 "data_offset": 0, 00:20:42.297 "data_size": 0 00:20:42.297 } 00:20:42.297 ] 00:20:42.297 }' 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.297 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.864 [2024-09-27 22:37:38.558611] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:42.864 [2024-09-27 22:37:38.558871] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:42.864 [2024-09-27 22:37:38.558891] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:42.864 [2024-09-27 22:37:38.559188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:42.864 BaseBdev2 00:20:42.864 [2024-09-27 22:37:38.559385] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:42.864 [2024-09-27 22:37:38.559401] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:42.864 [2024-09-27 22:37:38.559554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.864 22:37:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.864 [ 00:20:42.864 { 00:20:42.864 "name": "BaseBdev2", 00:20:42.864 "aliases": [ 00:20:42.864 "f4bde2e9-78e1-47d4-accd-f2dd9f83c462" 00:20:42.864 ], 00:20:42.864 "product_name": "Malloc disk", 00:20:42.864 "block_size": 4096, 00:20:42.864 "num_blocks": 8192, 00:20:42.864 "uuid": "f4bde2e9-78e1-47d4-accd-f2dd9f83c462", 00:20:42.864 "assigned_rate_limits": { 00:20:42.864 "rw_ios_per_sec": 0, 00:20:42.864 "rw_mbytes_per_sec": 0, 00:20:42.864 "r_mbytes_per_sec": 0, 00:20:42.864 "w_mbytes_per_sec": 0 00:20:42.864 }, 00:20:42.864 "claimed": true, 00:20:42.864 "claim_type": "exclusive_write", 00:20:42.864 "zoned": false, 00:20:42.864 "supported_io_types": { 00:20:42.864 "read": true, 00:20:42.864 "write": true, 00:20:42.864 "unmap": true, 00:20:42.864 "flush": true, 00:20:42.864 "reset": true, 00:20:42.864 "nvme_admin": false, 00:20:42.864 "nvme_io": false, 00:20:42.864 "nvme_io_md": false, 00:20:42.864 "write_zeroes": true, 00:20:42.864 "zcopy": true, 00:20:42.864 "get_zone_info": false, 00:20:42.864 "zone_management": false, 00:20:42.864 "zone_append": false, 00:20:42.864 "compare": false, 00:20:42.864 "compare_and_write": false, 00:20:42.864 "abort": true, 00:20:42.864 "seek_hole": false, 00:20:42.864 "seek_data": false, 00:20:42.864 "copy": true, 00:20:42.864 "nvme_iov_md": false 00:20:42.864 }, 00:20:42.864 "memory_domains": [ 00:20:42.864 { 00:20:42.864 "dma_device_id": "system", 00:20:42.864 
"dma_device_type": 1 00:20:42.864 }, 00:20:42.864 { 00:20:42.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.864 "dma_device_type": 2 00:20:42.864 } 00:20:42.864 ], 00:20:42.864 "driver_specific": {} 00:20:42.864 } 00:20:42.864 ] 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.864 "name": "Existed_Raid", 00:20:42.864 "uuid": "2ba76061-159a-4e03-8df9-16854f69c5eb", 00:20:42.864 "strip_size_kb": 0, 00:20:42.864 "state": "online", 00:20:42.864 "raid_level": "raid1", 00:20:42.864 "superblock": true, 00:20:42.864 "num_base_bdevs": 2, 00:20:42.864 "num_base_bdevs_discovered": 2, 00:20:42.864 "num_base_bdevs_operational": 2, 00:20:42.864 "base_bdevs_list": [ 00:20:42.864 { 00:20:42.864 "name": "BaseBdev1", 00:20:42.864 "uuid": "48116626-7d27-4286-b5a5-e31881119c05", 00:20:42.864 "is_configured": true, 00:20:42.864 "data_offset": 256, 00:20:42.864 "data_size": 7936 00:20:42.864 }, 00:20:42.864 { 00:20:42.864 "name": "BaseBdev2", 00:20:42.864 "uuid": "f4bde2e9-78e1-47d4-accd-f2dd9f83c462", 00:20:42.864 "is_configured": true, 00:20:42.864 "data_offset": 256, 00:20:42.864 "data_size": 7936 00:20:42.864 } 00:20:42.864 ] 00:20:42.864 }' 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.864 22:37:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:43.428 22:37:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.428 [2024-09-27 22:37:39.058372] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:43.428 "name": "Existed_Raid", 00:20:43.428 "aliases": [ 00:20:43.428 "2ba76061-159a-4e03-8df9-16854f69c5eb" 00:20:43.428 ], 00:20:43.428 "product_name": "Raid Volume", 00:20:43.428 "block_size": 4096, 00:20:43.428 "num_blocks": 7936, 00:20:43.428 "uuid": "2ba76061-159a-4e03-8df9-16854f69c5eb", 00:20:43.428 "assigned_rate_limits": { 00:20:43.428 "rw_ios_per_sec": 0, 00:20:43.428 "rw_mbytes_per_sec": 0, 00:20:43.428 "r_mbytes_per_sec": 0, 00:20:43.428 "w_mbytes_per_sec": 0 00:20:43.428 }, 00:20:43.428 "claimed": false, 00:20:43.428 "zoned": false, 00:20:43.428 "supported_io_types": { 00:20:43.428 "read": true, 00:20:43.428 "write": true, 00:20:43.428 "unmap": false, 00:20:43.428 "flush": false, 00:20:43.428 "reset": true, 00:20:43.428 
"nvme_admin": false, 00:20:43.428 "nvme_io": false, 00:20:43.428 "nvme_io_md": false, 00:20:43.428 "write_zeroes": true, 00:20:43.428 "zcopy": false, 00:20:43.428 "get_zone_info": false, 00:20:43.428 "zone_management": false, 00:20:43.428 "zone_append": false, 00:20:43.428 "compare": false, 00:20:43.428 "compare_and_write": false, 00:20:43.428 "abort": false, 00:20:43.428 "seek_hole": false, 00:20:43.428 "seek_data": false, 00:20:43.428 "copy": false, 00:20:43.428 "nvme_iov_md": false 00:20:43.428 }, 00:20:43.428 "memory_domains": [ 00:20:43.428 { 00:20:43.428 "dma_device_id": "system", 00:20:43.428 "dma_device_type": 1 00:20:43.428 }, 00:20:43.428 { 00:20:43.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.428 "dma_device_type": 2 00:20:43.428 }, 00:20:43.428 { 00:20:43.428 "dma_device_id": "system", 00:20:43.428 "dma_device_type": 1 00:20:43.428 }, 00:20:43.428 { 00:20:43.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.428 "dma_device_type": 2 00:20:43.428 } 00:20:43.428 ], 00:20:43.428 "driver_specific": { 00:20:43.428 "raid": { 00:20:43.428 "uuid": "2ba76061-159a-4e03-8df9-16854f69c5eb", 00:20:43.428 "strip_size_kb": 0, 00:20:43.428 "state": "online", 00:20:43.428 "raid_level": "raid1", 00:20:43.428 "superblock": true, 00:20:43.428 "num_base_bdevs": 2, 00:20:43.428 "num_base_bdevs_discovered": 2, 00:20:43.428 "num_base_bdevs_operational": 2, 00:20:43.428 "base_bdevs_list": [ 00:20:43.428 { 00:20:43.428 "name": "BaseBdev1", 00:20:43.428 "uuid": "48116626-7d27-4286-b5a5-e31881119c05", 00:20:43.428 "is_configured": true, 00:20:43.428 "data_offset": 256, 00:20:43.428 "data_size": 7936 00:20:43.428 }, 00:20:43.428 { 00:20:43.428 "name": "BaseBdev2", 00:20:43.428 "uuid": "f4bde2e9-78e1-47d4-accd-f2dd9f83c462", 00:20:43.428 "is_configured": true, 00:20:43.428 "data_offset": 256, 00:20:43.428 "data_size": 7936 00:20:43.428 } 00:20:43.428 ] 00:20:43.428 } 00:20:43.428 } 00:20:43.428 }' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:43.428 BaseBdev2' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:43.428 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # 
set +x 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.429 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.429 [2024-09-27 22:37:39.262133] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.687 "name": "Existed_Raid", 00:20:43.687 "uuid": "2ba76061-159a-4e03-8df9-16854f69c5eb", 00:20:43.687 "strip_size_kb": 0, 00:20:43.687 "state": "online", 00:20:43.687 "raid_level": "raid1", 00:20:43.687 "superblock": true, 00:20:43.687 "num_base_bdevs": 2, 00:20:43.687 "num_base_bdevs_discovered": 1, 00:20:43.687 "num_base_bdevs_operational": 1, 00:20:43.687 "base_bdevs_list": [ 00:20:43.687 { 00:20:43.687 
"name": null, 00:20:43.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.687 "is_configured": false, 00:20:43.687 "data_offset": 0, 00:20:43.687 "data_size": 7936 00:20:43.687 }, 00:20:43.687 { 00:20:43.687 "name": "BaseBdev2", 00:20:43.687 "uuid": "f4bde2e9-78e1-47d4-accd-f2dd9f83c462", 00:20:43.687 "is_configured": true, 00:20:43.687 "data_offset": 256, 00:20:43.687 "data_size": 7936 00:20:43.687 } 00:20:43.687 ] 00:20:43.687 }' 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.687 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:43.944 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.944 22:37:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.944 [2024-09-27 22:37:39.791140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:43.944 [2024-09-27 22:37:39.791347] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.202 [2024-09-27 22:37:39.885171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.202 [2024-09-27 22:37:39.885220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.202 [2024-09-27 22:37:39.885235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 
00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 87079 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 87079 ']' 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 87079 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87079 00:20:44.202 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.203 killing process with pid 87079 00:20:44.203 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.203 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87079' 00:20:44.203 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 87079 00:20:44.203 [2024-09-27 22:37:39.986839] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.203 22:37:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 87079 00:20:44.203 [2024-09-27 22:37:40.003520] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.171 22:37:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:46.171 00:20:46.171 real 0m6.098s 00:20:46.171 user 0m8.089s 00:20:46.171 sys 0m1.026s 00:20:46.171 22:37:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.171 22:37:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:20:46.171 ************************************ 00:20:46.171 END TEST raid_state_function_test_sb_4k 00:20:46.171 ************************************ 00:20:46.429 22:37:42 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:46.429 22:37:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:46.429 22:37:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.429 22:37:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:46.429 ************************************ 00:20:46.429 START TEST raid_superblock_test_4k 00:20:46.429 ************************************ 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:46.429 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87338 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 87338 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 87338 ']' 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.430 22:37:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.430 [2024-09-27 22:37:42.164603] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:20:46.430 [2024-09-27 22:37:42.164907] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87338 ] 00:20:46.688 [2024-09-27 22:37:42.337041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.688 [2024-09-27 22:37:42.564250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.946 [2024-09-27 22:37:42.811305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.946 [2024-09-27 22:37:42.811335] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 malloc1 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 [2024-09-27 22:37:43.323394] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:47.511 [2024-09-27 22:37:43.323584] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.511 [2024-09-27 22:37:43.323646] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:47.511 [2024-09-27 22:37:43.323740] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.511 [2024-09-27 22:37:43.326146] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.511 [2024-09-27 22:37:43.326283] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:47.511 pt1 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 malloc2 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.511 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 [2024-09-27 22:37:43.385906] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:47.511 [2024-09-27 22:37:43.386090] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.511 [2024-09-27 22:37:43.386155] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:47.511 [2024-09-27 22:37:43.386231] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.511 [2024-09-27 22:37:43.388601] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.511 [2024-09-27 
22:37:43.388738] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:47.770 pt2 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.770 [2024-09-27 22:37:43.397951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:47.770 [2024-09-27 22:37:43.400112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:47.770 [2024-09-27 22:37:43.400293] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:47.770 [2024-09-27 22:37:43.400306] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:47.770 [2024-09-27 22:37:43.400557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:47.770 [2024-09-27 22:37:43.400712] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:47.770 [2024-09-27 22:37:43.400726] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:47.770 [2024-09-27 22:37:43.400863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.770 "name": "raid_bdev1", 00:20:47.770 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:47.770 "strip_size_kb": 0, 00:20:47.770 "state": "online", 00:20:47.770 "raid_level": "raid1", 00:20:47.770 "superblock": true, 00:20:47.770 "num_base_bdevs": 2, 00:20:47.770 
"num_base_bdevs_discovered": 2, 00:20:47.770 "num_base_bdevs_operational": 2, 00:20:47.770 "base_bdevs_list": [ 00:20:47.770 { 00:20:47.770 "name": "pt1", 00:20:47.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:47.770 "is_configured": true, 00:20:47.770 "data_offset": 256, 00:20:47.770 "data_size": 7936 00:20:47.770 }, 00:20:47.770 { 00:20:47.770 "name": "pt2", 00:20:47.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:47.770 "is_configured": true, 00:20:47.770 "data_offset": 256, 00:20:47.770 "data_size": 7936 00:20:47.770 } 00:20:47.770 ] 00:20:47.770 }' 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.770 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.029 [2024-09-27 22:37:43.773638] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.029 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.029 "name": "raid_bdev1", 00:20:48.029 "aliases": [ 00:20:48.029 "72a5c429-c367-4204-87d4-ddadde50a5c7" 00:20:48.029 ], 00:20:48.029 "product_name": "Raid Volume", 00:20:48.029 "block_size": 4096, 00:20:48.029 "num_blocks": 7936, 00:20:48.029 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:48.029 "assigned_rate_limits": { 00:20:48.029 "rw_ios_per_sec": 0, 00:20:48.029 "rw_mbytes_per_sec": 0, 00:20:48.029 "r_mbytes_per_sec": 0, 00:20:48.029 "w_mbytes_per_sec": 0 00:20:48.029 }, 00:20:48.029 "claimed": false, 00:20:48.029 "zoned": false, 00:20:48.029 "supported_io_types": { 00:20:48.029 "read": true, 00:20:48.029 "write": true, 00:20:48.029 "unmap": false, 00:20:48.029 "flush": false, 00:20:48.029 "reset": true, 00:20:48.029 "nvme_admin": false, 00:20:48.029 "nvme_io": false, 00:20:48.029 "nvme_io_md": false, 00:20:48.029 "write_zeroes": true, 00:20:48.029 "zcopy": false, 00:20:48.029 "get_zone_info": false, 00:20:48.029 "zone_management": false, 00:20:48.029 "zone_append": false, 00:20:48.030 "compare": false, 00:20:48.030 "compare_and_write": false, 00:20:48.030 "abort": false, 00:20:48.030 "seek_hole": false, 00:20:48.030 "seek_data": false, 00:20:48.030 "copy": false, 00:20:48.030 "nvme_iov_md": false 00:20:48.030 }, 00:20:48.030 "memory_domains": [ 00:20:48.030 { 00:20:48.030 "dma_device_id": "system", 00:20:48.030 "dma_device_type": 1 00:20:48.030 }, 00:20:48.030 { 00:20:48.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.030 "dma_device_type": 2 00:20:48.030 }, 00:20:48.030 { 00:20:48.030 "dma_device_id": "system", 00:20:48.030 "dma_device_type": 1 00:20:48.030 }, 00:20:48.030 { 00:20:48.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.030 "dma_device_type": 2 00:20:48.030 } 00:20:48.030 ], 
00:20:48.030 "driver_specific": { 00:20:48.030 "raid": { 00:20:48.030 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:48.030 "strip_size_kb": 0, 00:20:48.030 "state": "online", 00:20:48.030 "raid_level": "raid1", 00:20:48.030 "superblock": true, 00:20:48.030 "num_base_bdevs": 2, 00:20:48.030 "num_base_bdevs_discovered": 2, 00:20:48.030 "num_base_bdevs_operational": 2, 00:20:48.030 "base_bdevs_list": [ 00:20:48.030 { 00:20:48.030 "name": "pt1", 00:20:48.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:48.030 "is_configured": true, 00:20:48.030 "data_offset": 256, 00:20:48.030 "data_size": 7936 00:20:48.030 }, 00:20:48.030 { 00:20:48.030 "name": "pt2", 00:20:48.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.030 "is_configured": true, 00:20:48.030 "data_offset": 256, 00:20:48.030 "data_size": 7936 00:20:48.030 } 00:20:48.030 ] 00:20:48.030 } 00:20:48.030 } 00:20:48.030 }' 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:48.030 pt2' 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:48.030 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.030 22:37:43 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:48.289 [2024-09-27 22:37:43.981324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=72a5c429-c367-4204-87d4-ddadde50a5c7 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 72a5c429-c367-4204-87d4-ddadde50a5c7 ']' 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 [2024-09-27 22:37:44.021093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:48.289 [2024-09-27 22:37:44.021213] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.289 [2024-09-27 22:37:44.021353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.289 [2024-09-27 22:37:44.021489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.289 [2024-09-27 22:37:44.021649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.289 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.289 [2024-09-27 22:37:44.157106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:48.289 [2024-09-27 22:37:44.159217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:48.289 [2024-09-27 22:37:44.159278] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:48.289 [2024-09-27 22:37:44.159333] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:48.289 [2024-09-27 22:37:44.159350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:48.289 [2024-09-27 22:37:44.159362] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:48.289 request: 00:20:48.289 { 00:20:48.289 "name": "raid_bdev1", 00:20:48.289 "raid_level": "raid1", 00:20:48.289 "base_bdevs": [ 00:20:48.289 "malloc1", 00:20:48.289 "malloc2" 00:20:48.289 ], 00:20:48.289 "superblock": false, 00:20:48.289 "method": "bdev_raid_create", 00:20:48.289 "req_id": 1 00:20:48.289 } 00:20:48.289 Got JSON-RPC error response 00:20:48.289 response: 00:20:48.289 { 00:20:48.289 "code": -17, 00:20:48.549 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:48.549 } 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 [2024-09-27 22:37:44.221067] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:48.549 [2024-09-27 22:37:44.221120] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.549 [2024-09-27 22:37:44.221141] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:48.549 [2024-09-27 22:37:44.221154] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.549 [2024-09-27 22:37:44.223555] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.549 [2024-09-27 22:37:44.223598] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:48.549 [2024-09-27 22:37:44.223669] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:48.549 [2024-09-27 22:37:44.223728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:48.549 pt1 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.549 "name": "raid_bdev1", 00:20:48.549 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:48.549 "strip_size_kb": 0, 00:20:48.549 "state": "configuring", 00:20:48.549 "raid_level": "raid1", 00:20:48.549 "superblock": true, 00:20:48.549 "num_base_bdevs": 2, 00:20:48.549 "num_base_bdevs_discovered": 1, 00:20:48.549 "num_base_bdevs_operational": 2, 00:20:48.549 "base_bdevs_list": [ 00:20:48.549 { 00:20:48.549 "name": "pt1", 00:20:48.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:48.549 "is_configured": true, 00:20:48.549 "data_offset": 256, 00:20:48.549 "data_size": 7936 00:20:48.549 }, 00:20:48.549 { 00:20:48.549 "name": null, 00:20:48.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.549 "is_configured": false, 00:20:48.549 "data_offset": 256, 00:20:48.549 "data_size": 7936 00:20:48.549 } 
00:20:48.549 ] 00:20:48.549 }' 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.549 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.808 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:48.808 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:48.808 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:48.808 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:48.808 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.808 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.808 [2024-09-27 22:37:44.648477] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:48.808 [2024-09-27 22:37:44.648682] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.808 [2024-09-27 22:37:44.648741] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:48.808 [2024-09-27 22:37:44.648958] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.808 [2024-09-27 22:37:44.649463] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.809 [2024-09-27 22:37:44.649496] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:48.809 [2024-09-27 22:37:44.649581] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:48.809 [2024-09-27 22:37:44.649606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:48.809 [2024-09-27 22:37:44.649724] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:20:48.809 [2024-09-27 22:37:44.649742] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:48.809 [2024-09-27 22:37:44.650002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:48.809 [2024-09-27 22:37:44.650157] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:48.809 [2024-09-27 22:37:44.650167] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:48.809 [2024-09-27 22:37:44.650300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.809 pt2 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.809 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.067 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.067 "name": "raid_bdev1", 00:20:49.067 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:49.067 "strip_size_kb": 0, 00:20:49.067 "state": "online", 00:20:49.067 "raid_level": "raid1", 00:20:49.067 "superblock": true, 00:20:49.067 "num_base_bdevs": 2, 00:20:49.067 "num_base_bdevs_discovered": 2, 00:20:49.067 "num_base_bdevs_operational": 2, 00:20:49.067 "base_bdevs_list": [ 00:20:49.067 { 00:20:49.067 "name": "pt1", 00:20:49.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:49.067 "is_configured": true, 00:20:49.067 "data_offset": 256, 00:20:49.067 "data_size": 7936 00:20:49.067 }, 00:20:49.067 { 00:20:49.067 "name": "pt2", 00:20:49.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:49.067 "is_configured": true, 00:20:49.067 "data_offset": 256, 00:20:49.067 "data_size": 7936 00:20:49.067 } 00:20:49.067 ] 00:20:49.067 }' 00:20:49.067 22:37:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.067 22:37:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:49.326 [2024-09-27 22:37:45.084321] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.326 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:49.326 "name": "raid_bdev1", 00:20:49.326 "aliases": [ 00:20:49.327 "72a5c429-c367-4204-87d4-ddadde50a5c7" 00:20:49.327 ], 00:20:49.327 "product_name": "Raid Volume", 00:20:49.327 "block_size": 4096, 00:20:49.327 "num_blocks": 7936, 00:20:49.327 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:49.327 "assigned_rate_limits": { 00:20:49.327 "rw_ios_per_sec": 0, 00:20:49.327 "rw_mbytes_per_sec": 0, 00:20:49.327 "r_mbytes_per_sec": 0, 00:20:49.327 "w_mbytes_per_sec": 0 00:20:49.327 }, 00:20:49.327 "claimed": false, 00:20:49.327 "zoned": false, 00:20:49.327 "supported_io_types": { 00:20:49.327 "read": true, 00:20:49.327 "write": true, 00:20:49.327 "unmap": false, 
00:20:49.327 "flush": false, 00:20:49.327 "reset": true, 00:20:49.327 "nvme_admin": false, 00:20:49.327 "nvme_io": false, 00:20:49.327 "nvme_io_md": false, 00:20:49.327 "write_zeroes": true, 00:20:49.327 "zcopy": false, 00:20:49.327 "get_zone_info": false, 00:20:49.327 "zone_management": false, 00:20:49.327 "zone_append": false, 00:20:49.327 "compare": false, 00:20:49.327 "compare_and_write": false, 00:20:49.327 "abort": false, 00:20:49.327 "seek_hole": false, 00:20:49.327 "seek_data": false, 00:20:49.327 "copy": false, 00:20:49.327 "nvme_iov_md": false 00:20:49.327 }, 00:20:49.327 "memory_domains": [ 00:20:49.327 { 00:20:49.327 "dma_device_id": "system", 00:20:49.327 "dma_device_type": 1 00:20:49.327 }, 00:20:49.327 { 00:20:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.327 "dma_device_type": 2 00:20:49.327 }, 00:20:49.327 { 00:20:49.327 "dma_device_id": "system", 00:20:49.327 "dma_device_type": 1 00:20:49.327 }, 00:20:49.327 { 00:20:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.327 "dma_device_type": 2 00:20:49.327 } 00:20:49.327 ], 00:20:49.327 "driver_specific": { 00:20:49.327 "raid": { 00:20:49.327 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:49.327 "strip_size_kb": 0, 00:20:49.327 "state": "online", 00:20:49.327 "raid_level": "raid1", 00:20:49.327 "superblock": true, 00:20:49.327 "num_base_bdevs": 2, 00:20:49.327 "num_base_bdevs_discovered": 2, 00:20:49.327 "num_base_bdevs_operational": 2, 00:20:49.327 "base_bdevs_list": [ 00:20:49.327 { 00:20:49.327 "name": "pt1", 00:20:49.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:49.327 "is_configured": true, 00:20:49.327 "data_offset": 256, 00:20:49.327 "data_size": 7936 00:20:49.327 }, 00:20:49.327 { 00:20:49.327 "name": "pt2", 00:20:49.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:49.327 "is_configured": true, 00:20:49.327 "data_offset": 256, 00:20:49.327 "data_size": 7936 00:20:49.327 } 00:20:49.327 ] 00:20:49.327 } 00:20:49.327 } 00:20:49.327 }' 00:20:49.327 
22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:49.327 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:49.327 pt2' 00:20:49.327 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.595 
22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:49.595 [2024-09-27 22:37:45.311953] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 72a5c429-c367-4204-87d4-ddadde50a5c7 '!=' 72a5c429-c367-4204-87d4-ddadde50a5c7 ']' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 [2024-09-27 22:37:45.355701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:49.595 
22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.595 "name": "raid_bdev1", 00:20:49.595 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 
00:20:49.595 "strip_size_kb": 0, 00:20:49.595 "state": "online", 00:20:49.595 "raid_level": "raid1", 00:20:49.595 "superblock": true, 00:20:49.595 "num_base_bdevs": 2, 00:20:49.595 "num_base_bdevs_discovered": 1, 00:20:49.595 "num_base_bdevs_operational": 1, 00:20:49.595 "base_bdevs_list": [ 00:20:49.595 { 00:20:49.595 "name": null, 00:20:49.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.595 "is_configured": false, 00:20:49.595 "data_offset": 0, 00:20:49.595 "data_size": 7936 00:20:49.595 }, 00:20:49.595 { 00:20:49.595 "name": "pt2", 00:20:49.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:49.595 "is_configured": true, 00:20:49.595 "data_offset": 256, 00:20:49.595 "data_size": 7936 00:20:49.595 } 00:20:49.595 ] 00:20:49.595 }' 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.595 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 [2024-09-27 22:37:45.751108] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.163 [2024-09-27 22:37:45.751137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.163 [2024-09-27 22:37:45.751211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.163 [2024-09-27 22:37:45.751256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.163 [2024-09-27 22:37:45.751270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:50.163 22:37:45 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:20:50.163 22:37:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 [2024-09-27 22:37:45.823076] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:50.163 [2024-09-27 22:37:45.823133] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.163 [2024-09-27 22:37:45.823152] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:50.163 [2024-09-27 22:37:45.823166] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.163 [2024-09-27 22:37:45.825637] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.163 [2024-09-27 22:37:45.825681] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:50.163 [2024-09-27 22:37:45.825762] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:50.163 [2024-09-27 22:37:45.825814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:50.163 [2024-09-27 22:37:45.825909] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:50.163 [2024-09-27 22:37:45.825924] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:50.163 [2024-09-27 22:37:45.826182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:50.163 [2024-09-27 22:37:45.826392] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:50.163 [2024-09-27 22:37:45.826407] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:20:50.163 [2024-09-27 22:37:45.826561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.163 pt2 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.163 "name": "raid_bdev1", 00:20:50.163 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:50.163 "strip_size_kb": 0, 00:20:50.163 "state": "online", 00:20:50.163 "raid_level": "raid1", 00:20:50.163 "superblock": true, 00:20:50.163 "num_base_bdevs": 2, 00:20:50.163 "num_base_bdevs_discovered": 1, 00:20:50.163 "num_base_bdevs_operational": 1, 00:20:50.163 "base_bdevs_list": [ 00:20:50.163 { 00:20:50.163 "name": null, 00:20:50.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.163 "is_configured": false, 00:20:50.163 "data_offset": 256, 00:20:50.163 "data_size": 7936 00:20:50.163 }, 00:20:50.163 { 00:20:50.163 "name": "pt2", 00:20:50.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:50.163 "is_configured": true, 00:20:50.163 "data_offset": 256, 00:20:50.163 "data_size": 7936 00:20:50.163 } 00:20:50.163 ] 00:20:50.163 }' 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.163 22:37:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.422 [2024-09-27 22:37:46.238435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.422 [2024-09-27 22:37:46.238585] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.422 [2024-09-27 22:37:46.238723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.422 [2024-09-27 22:37:46.238800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.422 [2024-09-27 22:37:46.238911] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.422 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.422 [2024-09-27 22:37:46.298365] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:50.422 [2024-09-27 22:37:46.298425] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.422 [2024-09-27 22:37:46.298448] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:50.422 [2024-09-27 22:37:46.298460] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.681 [2024-09-27 22:37:46.300892] vbdev_passthru.c: 
790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.681 [2024-09-27 22:37:46.300935] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:50.681 [2024-09-27 22:37:46.301034] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:50.681 [2024-09-27 22:37:46.301086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:50.681 [2024-09-27 22:37:46.301211] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:50.681 [2024-09-27 22:37:46.301227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.681 [2024-09-27 22:37:46.301247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:50.681 [2024-09-27 22:37:46.301312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:50.681 [2024-09-27 22:37:46.301392] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:50.681 [2024-09-27 22:37:46.301402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:50.681 [2024-09-27 22:37:46.301646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:50.681 [2024-09-27 22:37:46.301789] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:50.681 [2024-09-27 22:37:46.301802] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:50.681 [2024-09-27 22:37:46.301943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.681 pt1 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.681 "name": "raid_bdev1", 00:20:50.681 "uuid": "72a5c429-c367-4204-87d4-ddadde50a5c7", 00:20:50.681 "strip_size_kb": 0, 00:20:50.681 "state": "online", 00:20:50.681 "raid_level": "raid1", 
00:20:50.681 "superblock": true, 00:20:50.681 "num_base_bdevs": 2, 00:20:50.681 "num_base_bdevs_discovered": 1, 00:20:50.681 "num_base_bdevs_operational": 1, 00:20:50.681 "base_bdevs_list": [ 00:20:50.681 { 00:20:50.681 "name": null, 00:20:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.681 "is_configured": false, 00:20:50.681 "data_offset": 256, 00:20:50.681 "data_size": 7936 00:20:50.681 }, 00:20:50.681 { 00:20:50.681 "name": "pt2", 00:20:50.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:50.681 "is_configured": true, 00:20:50.681 "data_offset": 256, 00:20:50.681 "data_size": 7936 00:20:50.681 } 00:20:50.681 ] 00:20:50.681 }' 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.681 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:50.940 
[2024-09-27 22:37:46.793870] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.940 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 72a5c429-c367-4204-87d4-ddadde50a5c7 '!=' 72a5c429-c367-4204-87d4-ddadde50a5c7 ']' 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87338 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 87338 ']' 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 87338 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87338 00:20:51.198 killing process with pid 87338 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87338' 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 87338 00:20:51.198 [2024-09-27 22:37:46.862725] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.198 [2024-09-27 22:37:46.862803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.198 [2024-09-27 22:37:46.862846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.198 [2024-09-27 22:37:46.862863] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:51.198 22:37:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 87338 00:20:51.198 [2024-09-27 22:37:47.073499] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.734 ************************************ 00:20:53.734 END TEST raid_superblock_test_4k 00:20:53.734 ************************************ 00:20:53.734 22:37:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:53.734 00:20:53.734 real 0m6.945s 00:20:53.734 user 0m9.808s 00:20:53.734 sys 0m1.200s 00:20:53.734 22:37:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:53.734 22:37:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.734 22:37:49 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:53.734 22:37:49 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:53.734 22:37:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:53.734 22:37:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:53.734 22:37:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:53.734 ************************************ 00:20:53.734 START TEST raid_rebuild_test_sb_4k 00:20:53.734 ************************************ 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:53.734 22:37:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87672 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87672 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 87672 ']' 00:20:53.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.734 22:37:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.734 [2024-09-27 22:37:49.191727] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:20:53.734 [2024-09-27 22:37:49.192052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87672 ] 00:20:53.734 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:20:53.734 Zero copy mechanism will not be used. 00:20:53.734 [2024-09-27 22:37:49.357116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.734 [2024-09-27 22:37:49.584261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.993 [2024-09-27 22:37:49.823585] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.993 [2024-09-27 22:37:49.823762] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.599 BaseBdev1_malloc 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.599 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.599 [2024-09-27 22:37:50.344749] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:54.599 [2024-09-27 22:37:50.344959] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.599 [2024-09-27 22:37:50.345033] vbdev_passthru.c: 762:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:20:54.599 [2024-09-27 22:37:50.345129] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.599 [2024-09-27 22:37:50.347569] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.600 [2024-09-27 22:37:50.347716] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.600 BaseBdev1 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.600 BaseBdev2_malloc 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.600 [2024-09-27 22:37:50.407363] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:54.600 [2024-09-27 22:37:50.407431] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.600 [2024-09-27 22:37:50.407457] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:54.600 [2024-09-27 22:37:50.407471] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:54.600 [2024-09-27 22:37:50.409833] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.600 [2024-09-27 22:37:50.409876] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:54.600 BaseBdev2 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.600 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.861 spare_malloc 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.861 spare_delay 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.861 [2024-09-27 22:37:50.480116] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:54.861 [2024-09-27 22:37:50.480186] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.861 [2024-09-27 22:37:50.480207] 
vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:54.861 [2024-09-27 22:37:50.480222] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.861 [2024-09-27 22:37:50.482546] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.861 [2024-09-27 22:37:50.482588] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:54.861 spare 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.861 [2024-09-27 22:37:50.492157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.861 [2024-09-27 22:37:50.494213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.861 [2024-09-27 22:37:50.494413] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:54.861 [2024-09-27 22:37:50.494430] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:54.861 [2024-09-27 22:37:50.494706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:54.861 [2024-09-27 22:37:50.494868] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:54.861 [2024-09-27 22:37:50.494878] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:54.861 [2024-09-27 22:37:50.495041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.861 
22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.861 "name": "raid_bdev1", 00:20:54.861 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 
00:20:54.861 "strip_size_kb": 0, 00:20:54.861 "state": "online", 00:20:54.861 "raid_level": "raid1", 00:20:54.861 "superblock": true, 00:20:54.861 "num_base_bdevs": 2, 00:20:54.861 "num_base_bdevs_discovered": 2, 00:20:54.861 "num_base_bdevs_operational": 2, 00:20:54.861 "base_bdevs_list": [ 00:20:54.861 { 00:20:54.861 "name": "BaseBdev1", 00:20:54.861 "uuid": "7d218b0e-fdfb-556e-a423-ef602fab1e37", 00:20:54.861 "is_configured": true, 00:20:54.861 "data_offset": 256, 00:20:54.861 "data_size": 7936 00:20:54.861 }, 00:20:54.861 { 00:20:54.861 "name": "BaseBdev2", 00:20:54.861 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:54.861 "is_configured": true, 00:20:54.861 "data_offset": 256, 00:20:54.861 "data_size": 7936 00:20:54.861 } 00:20:54.861 ] 00:20:54.861 }' 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.861 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.121 [2024-09-27 22:37:50.880346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.121 22:37:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:55.121 22:37:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:55.380 [2024-09-27 22:37:51.155733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:20:55.380 /dev/nbd0 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:55.380 1+0 records in 00:20:55.380 1+0 records out 00:20:55.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488867 s, 8.4 MB/s 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:55.380 22:37:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:55.380 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:56.318 7936+0 records in 00:20:56.318 7936+0 records out 00:20:56.318 32505856 bytes (33 MB, 31 MiB) copied, 0.705915 s, 46.0 MB/s 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:56.318 22:37:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:56.318 [2024-09-27 22:37:52.163542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.318 [2024-09-27 22:37:52.184347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.318 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.577 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.577 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.577 "name": "raid_bdev1", 00:20:56.577 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:20:56.577 "strip_size_kb": 0, 00:20:56.577 "state": "online", 00:20:56.577 "raid_level": "raid1", 00:20:56.577 "superblock": true, 00:20:56.577 "num_base_bdevs": 2, 00:20:56.577 "num_base_bdevs_discovered": 1, 00:20:56.577 "num_base_bdevs_operational": 1, 00:20:56.577 "base_bdevs_list": [ 00:20:56.577 { 00:20:56.577 "name": null, 00:20:56.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.577 "is_configured": false, 00:20:56.577 "data_offset": 0, 00:20:56.577 "data_size": 7936 00:20:56.577 }, 00:20:56.577 { 00:20:56.577 "name": "BaseBdev2", 00:20:56.577 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:56.577 "is_configured": true, 00:20:56.577 "data_offset": 256, 00:20:56.577 "data_size": 7936 00:20:56.577 } 00:20:56.578 ] 00:20:56.578 }' 00:20:56.578 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.578 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.837 22:37:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:56.837 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.837 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.837 [2024-09-27 22:37:52.604154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.837 [2024-09-27 22:37:52.621508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:56.837 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.837 22:37:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:56.837 [2024-09-27 22:37:52.623716] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.776 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.036 22:37:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.036 "name": "raid_bdev1", 00:20:58.036 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:20:58.036 "strip_size_kb": 0, 00:20:58.036 "state": "online", 00:20:58.036 "raid_level": "raid1", 00:20:58.036 "superblock": true, 00:20:58.036 "num_base_bdevs": 2, 00:20:58.036 "num_base_bdevs_discovered": 2, 00:20:58.036 "num_base_bdevs_operational": 2, 00:20:58.036 "process": { 00:20:58.036 "type": "rebuild", 00:20:58.036 "target": "spare", 00:20:58.036 "progress": { 00:20:58.036 "blocks": 2560, 00:20:58.036 "percent": 32 00:20:58.036 } 00:20:58.036 }, 00:20:58.036 "base_bdevs_list": [ 00:20:58.036 { 00:20:58.036 "name": "spare", 00:20:58.036 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:20:58.036 "is_configured": true, 00:20:58.036 "data_offset": 256, 00:20:58.036 "data_size": 7936 00:20:58.036 }, 00:20:58.036 { 00:20:58.036 "name": "BaseBdev2", 00:20:58.036 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:58.036 "is_configured": true, 00:20:58.036 "data_offset": 256, 00:20:58.036 "data_size": 7936 00:20:58.036 } 00:20:58.036 ] 00:20:58.036 }' 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.036 22:37:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.036 [2024-09-27 22:37:53.752130] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:58.036 [2024-09-27 22:37:53.828879] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:58.036 [2024-09-27 22:37:53.828947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.036 [2024-09-27 22:37:53.828981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:58.036 [2024-09-27 22:37:53.829010] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.036 22:37:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.036 "name": "raid_bdev1", 00:20:58.036 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:20:58.036 "strip_size_kb": 0, 00:20:58.036 "state": "online", 00:20:58.036 "raid_level": "raid1", 00:20:58.036 "superblock": true, 00:20:58.036 "num_base_bdevs": 2, 00:20:58.036 "num_base_bdevs_discovered": 1, 00:20:58.036 "num_base_bdevs_operational": 1, 00:20:58.036 "base_bdevs_list": [ 00:20:58.036 { 00:20:58.036 "name": null, 00:20:58.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.036 "is_configured": false, 00:20:58.036 "data_offset": 0, 00:20:58.036 "data_size": 7936 00:20:58.036 }, 00:20:58.036 { 00:20:58.036 "name": "BaseBdev2", 00:20:58.036 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:58.036 "is_configured": true, 00:20:58.036 "data_offset": 256, 00:20:58.036 "data_size": 7936 00:20:58.036 } 00:20:58.036 ] 00:20:58.036 }' 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.036 22:37:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.604 22:37:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.604 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.605 "name": "raid_bdev1", 00:20:58.605 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:20:58.605 "strip_size_kb": 0, 00:20:58.605 "state": "online", 00:20:58.605 "raid_level": "raid1", 00:20:58.605 "superblock": true, 00:20:58.605 "num_base_bdevs": 2, 00:20:58.605 "num_base_bdevs_discovered": 1, 00:20:58.605 "num_base_bdevs_operational": 1, 00:20:58.605 "base_bdevs_list": [ 00:20:58.605 { 00:20:58.605 "name": null, 00:20:58.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.605 "is_configured": false, 00:20:58.605 "data_offset": 0, 00:20:58.605 "data_size": 7936 00:20:58.605 }, 00:20:58.605 { 00:20:58.605 "name": "BaseBdev2", 00:20:58.605 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:58.605 "is_configured": true, 00:20:58.605 "data_offset": 256, 00:20:58.605 "data_size": 7936 00:20:58.605 } 00:20:58.605 ] 00:20:58.605 }' 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.605 22:37:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.605 [2024-09-27 22:37:54.412176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.605 [2024-09-27 22:37:54.428882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.605 22:37:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:58.605 [2024-09-27 22:37:54.431008] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.983 "name": "raid_bdev1", 00:20:59.983 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:20:59.983 "strip_size_kb": 0, 00:20:59.983 "state": "online", 00:20:59.983 "raid_level": "raid1", 00:20:59.983 "superblock": true, 00:20:59.983 "num_base_bdevs": 2, 00:20:59.983 "num_base_bdevs_discovered": 2, 00:20:59.983 "num_base_bdevs_operational": 2, 00:20:59.983 "process": { 00:20:59.983 "type": "rebuild", 00:20:59.983 "target": "spare", 00:20:59.983 "progress": { 00:20:59.983 "blocks": 2560, 00:20:59.983 "percent": 32 00:20:59.983 } 00:20:59.983 }, 00:20:59.983 "base_bdevs_list": [ 00:20:59.983 { 00:20:59.983 "name": "spare", 00:20:59.983 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:20:59.983 "is_configured": true, 00:20:59.983 "data_offset": 256, 00:20:59.983 "data_size": 7936 00:20:59.983 }, 00:20:59.983 { 00:20:59.983 "name": "BaseBdev2", 00:20:59.983 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:59.983 "is_configured": true, 00:20:59.983 "data_offset": 256, 00:20:59.983 "data_size": 7936 00:20:59.983 } 00:20:59.983 ] 00:20:59.983 }' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:59.983 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=779 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.983 22:37:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.983 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.983 "name": "raid_bdev1", 00:20:59.983 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:20:59.983 "strip_size_kb": 0, 00:20:59.983 "state": "online", 00:20:59.983 "raid_level": "raid1", 00:20:59.983 "superblock": true, 00:20:59.983 "num_base_bdevs": 2, 00:20:59.983 "num_base_bdevs_discovered": 2, 00:20:59.983 "num_base_bdevs_operational": 2, 00:20:59.983 "process": { 00:20:59.983 "type": "rebuild", 00:20:59.983 "target": "spare", 00:20:59.983 "progress": { 00:20:59.983 "blocks": 2816, 00:20:59.983 "percent": 35 00:20:59.983 } 00:20:59.984 }, 00:20:59.984 "base_bdevs_list": [ 00:20:59.984 { 00:20:59.984 "name": "spare", 00:20:59.984 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:20:59.984 "is_configured": true, 00:20:59.984 "data_offset": 256, 00:20:59.984 "data_size": 7936 00:20:59.984 }, 00:20:59.984 { 00:20:59.984 "name": "BaseBdev2", 00:20:59.984 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:20:59.984 "is_configured": true, 00:20:59.984 "data_offset": 256, 00:20:59.984 "data_size": 7936 00:20:59.984 } 00:20:59.984 ] 00:20:59.984 }' 00:20:59.984 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.984 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.984 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.984 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.984 22:37:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.918 "name": "raid_bdev1", 00:21:00.918 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:00.918 "strip_size_kb": 0, 00:21:00.918 "state": "online", 00:21:00.918 "raid_level": "raid1", 00:21:00.918 "superblock": true, 00:21:00.918 "num_base_bdevs": 2, 00:21:00.918 "num_base_bdevs_discovered": 2, 00:21:00.918 "num_base_bdevs_operational": 2, 00:21:00.918 "process": { 00:21:00.918 "type": "rebuild", 00:21:00.918 "target": "spare", 00:21:00.918 "progress": { 00:21:00.918 "blocks": 5632, 00:21:00.918 "percent": 70 00:21:00.918 } 00:21:00.918 }, 00:21:00.918 "base_bdevs_list": [ 00:21:00.918 { 00:21:00.918 "name": "spare", 00:21:00.918 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:00.918 "is_configured": true, 00:21:00.918 "data_offset": 256, 00:21:00.918 "data_size": 7936 00:21:00.918 
}, 00:21:00.918 { 00:21:00.918 "name": "BaseBdev2", 00:21:00.918 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:00.918 "is_configured": true, 00:21:00.918 "data_offset": 256, 00:21:00.918 "data_size": 7936 00:21:00.918 } 00:21:00.918 ] 00:21:00.918 }' 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.918 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.176 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.176 22:37:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:01.741 [2024-09-27 22:37:57.544004] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:01.741 [2024-09-27 22:37:57.544100] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:01.741 [2024-09-27 22:37:57.544229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.998 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.257 "name": "raid_bdev1", 00:21:02.257 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:02.257 "strip_size_kb": 0, 00:21:02.257 "state": "online", 00:21:02.257 "raid_level": "raid1", 00:21:02.257 "superblock": true, 00:21:02.257 "num_base_bdevs": 2, 00:21:02.257 "num_base_bdevs_discovered": 2, 00:21:02.257 "num_base_bdevs_operational": 2, 00:21:02.257 "base_bdevs_list": [ 00:21:02.257 { 00:21:02.257 "name": "spare", 00:21:02.257 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:02.257 "is_configured": true, 00:21:02.257 "data_offset": 256, 00:21:02.257 "data_size": 7936 00:21:02.257 }, 00:21:02.257 { 00:21:02.257 "name": "BaseBdev2", 00:21:02.257 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:02.257 "is_configured": true, 00:21:02.257 "data_offset": 256, 00:21:02.257 "data_size": 7936 00:21:02.257 } 00:21:02.257 ] 00:21:02.257 }' 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.257 22:37:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.257 "name": "raid_bdev1", 00:21:02.257 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:02.257 "strip_size_kb": 0, 00:21:02.257 "state": "online", 00:21:02.257 "raid_level": "raid1", 00:21:02.257 "superblock": true, 00:21:02.257 "num_base_bdevs": 2, 00:21:02.257 "num_base_bdevs_discovered": 2, 00:21:02.257 "num_base_bdevs_operational": 2, 00:21:02.257 "base_bdevs_list": [ 00:21:02.257 { 00:21:02.257 "name": "spare", 00:21:02.257 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:02.257 "is_configured": true, 00:21:02.257 "data_offset": 256, 00:21:02.257 "data_size": 7936 00:21:02.257 }, 00:21:02.257 { 00:21:02.257 "name": "BaseBdev2", 00:21:02.257 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:02.257 "is_configured": true, 
00:21:02.257 "data_offset": 256, 00:21:02.257 "data_size": 7936 00:21:02.257 } 00:21:02.257 ] 00:21:02.257 }' 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.257 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.516 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.516 "name": "raid_bdev1", 00:21:02.516 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:02.516 "strip_size_kb": 0, 00:21:02.516 "state": "online", 00:21:02.516 "raid_level": "raid1", 00:21:02.516 "superblock": true, 00:21:02.516 "num_base_bdevs": 2, 00:21:02.516 "num_base_bdevs_discovered": 2, 00:21:02.516 "num_base_bdevs_operational": 2, 00:21:02.516 "base_bdevs_list": [ 00:21:02.516 { 00:21:02.516 "name": "spare", 00:21:02.516 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:02.516 "is_configured": true, 00:21:02.516 "data_offset": 256, 00:21:02.516 "data_size": 7936 00:21:02.516 }, 00:21:02.516 { 00:21:02.516 "name": "BaseBdev2", 00:21:02.516 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:02.516 "is_configured": true, 00:21:02.516 "data_offset": 256, 00:21:02.516 "data_size": 7936 00:21:02.516 } 00:21:02.516 ] 00:21:02.516 }' 00:21:02.516 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.516 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.776 [2024-09-27 22:37:58.528147] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.776 [2024-09-27 22:37:58.528297] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:21:02.776 [2024-09-27 22:37:58.528457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.776 [2024-09-27 22:37:58.528555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.776 [2024-09-27 22:37:58.528781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:02.776 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:03.035 /dev/nbd0 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.035 1+0 records in 00:21:03.035 1+0 records out 00:21:03.035 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000373798 s, 11.0 MB/s 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:03.035 22:37:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:03.294 /dev/nbd1 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.294 1+0 records in 00:21:03.294 1+0 records out 00:21:03.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420766 s, 9.7 MB/s 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:03.294 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.553 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.812 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.071 [2024-09-27 22:37:59.752340] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:04.071 [2024-09-27 22:37:59.752399] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.071 [2024-09-27 22:37:59.752427] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:04.071 [2024-09-27 22:37:59.752438] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.071 [2024-09-27 22:37:59.754895] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.071 [2024-09-27 22:37:59.755060] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:04.071 [2024-09-27 22:37:59.755173] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:04.071 [2024-09-27 
22:37:59.755229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.071 [2024-09-27 22:37:59.755375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.071 spare 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.071 [2024-09-27 22:37:59.855307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:04.071 [2024-09-27 22:37:59.855363] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:04.071 [2024-09-27 22:37:59.855712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:04.071 [2024-09-27 22:37:59.855919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:04.071 [2024-09-27 22:37:59.855930] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:04.071 [2024-09-27 22:37:59.856195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.071 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.071 "name": "raid_bdev1", 00:21:04.071 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:04.071 "strip_size_kb": 0, 00:21:04.071 "state": "online", 00:21:04.071 "raid_level": "raid1", 00:21:04.071 "superblock": true, 00:21:04.071 "num_base_bdevs": 2, 00:21:04.072 "num_base_bdevs_discovered": 2, 00:21:04.072 "num_base_bdevs_operational": 2, 00:21:04.072 "base_bdevs_list": [ 00:21:04.072 { 00:21:04.072 "name": "spare", 00:21:04.072 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:04.072 "is_configured": true, 00:21:04.072 "data_offset": 256, 00:21:04.072 "data_size": 7936 00:21:04.072 }, 00:21:04.072 { 
00:21:04.072 "name": "BaseBdev2", 00:21:04.072 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:04.072 "is_configured": true, 00:21:04.072 "data_offset": 256, 00:21:04.072 "data_size": 7936 00:21:04.072 } 00:21:04.072 ] 00:21:04.072 }' 00:21:04.072 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.072 22:37:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.641 "name": "raid_bdev1", 00:21:04.641 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:04.641 "strip_size_kb": 0, 00:21:04.641 "state": "online", 00:21:04.641 "raid_level": "raid1", 00:21:04.641 "superblock": true, 00:21:04.641 "num_base_bdevs": 2, 00:21:04.641 "num_base_bdevs_discovered": 2, 
00:21:04.641 "num_base_bdevs_operational": 2, 00:21:04.641 "base_bdevs_list": [ 00:21:04.641 { 00:21:04.641 "name": "spare", 00:21:04.641 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:04.641 "is_configured": true, 00:21:04.641 "data_offset": 256, 00:21:04.641 "data_size": 7936 00:21:04.641 }, 00:21:04.641 { 00:21:04.641 "name": "BaseBdev2", 00:21:04.641 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:04.641 "is_configured": true, 00:21:04.641 "data_offset": 256, 00:21:04.641 "data_size": 7936 00:21:04.641 } 00:21:04.641 ] 00:21:04.641 }' 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.641 [2024-09-27 22:38:00.484125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.641 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.900 22:38:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.900 "name": "raid_bdev1", 00:21:04.900 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:04.900 "strip_size_kb": 0, 00:21:04.900 "state": "online", 00:21:04.900 "raid_level": "raid1", 00:21:04.900 "superblock": true, 00:21:04.900 "num_base_bdevs": 2, 00:21:04.900 "num_base_bdevs_discovered": 1, 00:21:04.900 "num_base_bdevs_operational": 1, 00:21:04.900 "base_bdevs_list": [ 00:21:04.900 { 00:21:04.900 "name": null, 00:21:04.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.900 "is_configured": false, 00:21:04.900 "data_offset": 0, 00:21:04.900 "data_size": 7936 00:21:04.900 }, 00:21:04.900 { 00:21:04.900 "name": "BaseBdev2", 00:21:04.900 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:04.900 "is_configured": true, 00:21:04.900 "data_offset": 256, 00:21:04.900 "data_size": 7936 00:21:04.900 } 00:21:04.900 ] 00:21:04.900 }' 00:21:04.900 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.900 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.158 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.158 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.158 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:05.158 [2024-09-27 22:38:00.900161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.158 [2024-09-27 22:38:00.900354] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:05.158 [2024-09-27 22:38:00.900373] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:05.158 [2024-09-27 22:38:00.900417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.158 [2024-09-27 22:38:00.917074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:05.158 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.158 22:38:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:05.159 [2024-09-27 22:38:00.919195] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.094 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:06.094 "name": "raid_bdev1", 00:21:06.094 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:06.094 "strip_size_kb": 0, 00:21:06.094 "state": "online", 
00:21:06.094 "raid_level": "raid1", 00:21:06.094 "superblock": true, 00:21:06.094 "num_base_bdevs": 2, 00:21:06.094 "num_base_bdevs_discovered": 2, 00:21:06.094 "num_base_bdevs_operational": 2, 00:21:06.094 "process": { 00:21:06.094 "type": "rebuild", 00:21:06.094 "target": "spare", 00:21:06.094 "progress": { 00:21:06.094 "blocks": 2560, 00:21:06.094 "percent": 32 00:21:06.094 } 00:21:06.094 }, 00:21:06.094 "base_bdevs_list": [ 00:21:06.094 { 00:21:06.094 "name": "spare", 00:21:06.094 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:06.094 "is_configured": true, 00:21:06.094 "data_offset": 256, 00:21:06.094 "data_size": 7936 00:21:06.094 }, 00:21:06.094 { 00:21:06.094 "name": "BaseBdev2", 00:21:06.094 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:06.094 "is_configured": true, 00:21:06.094 "data_offset": 256, 00:21:06.094 "data_size": 7936 00:21:06.094 } 00:21:06.094 ] 00:21:06.094 }' 00:21:06.353 22:38:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.353 [2024-09-27 22:38:02.067159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.353 [2024-09-27 22:38:02.124207] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.353 [2024-09-27 
22:38:02.124274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.353 [2024-09-27 22:38:02.124289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.353 [2024-09-27 22:38:02.124301] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.353 "name": "raid_bdev1", 00:21:06.353 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:06.353 "strip_size_kb": 0, 00:21:06.353 "state": "online", 00:21:06.353 "raid_level": "raid1", 00:21:06.353 "superblock": true, 00:21:06.353 "num_base_bdevs": 2, 00:21:06.353 "num_base_bdevs_discovered": 1, 00:21:06.353 "num_base_bdevs_operational": 1, 00:21:06.353 "base_bdevs_list": [ 00:21:06.353 { 00:21:06.353 "name": null, 00:21:06.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.353 "is_configured": false, 00:21:06.353 "data_offset": 0, 00:21:06.353 "data_size": 7936 00:21:06.353 }, 00:21:06.353 { 00:21:06.353 "name": "BaseBdev2", 00:21:06.353 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:06.353 "is_configured": true, 00:21:06.353 "data_offset": 256, 00:21:06.353 "data_size": 7936 00:21:06.353 } 00:21:06.353 ] 00:21:06.353 }' 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.353 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.921 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.921 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.921 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:06.921 [2024-09-27 22:38:02.567448] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.921 [2024-09-27 22:38:02.567518] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.921 [2024-09-27 22:38:02.567541] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:21:06.921 [2024-09-27 22:38:02.567555] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.921 [2024-09-27 22:38:02.568071] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.921 [2024-09-27 22:38:02.568101] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.921 [2024-09-27 22:38:02.568198] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:06.921 [2024-09-27 22:38:02.568214] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:06.921 [2024-09-27 22:38:02.568225] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:06.921 [2024-09-27 22:38:02.568251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.921 [2024-09-27 22:38:02.586696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:06.921 spare 00:21:06.921 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.921 22:38:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:06.921 [2024-09-27 22:38:02.588811] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.857 "name": "raid_bdev1", 00:21:07.857 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:07.857 "strip_size_kb": 0, 00:21:07.857 "state": "online", 00:21:07.857 "raid_level": "raid1", 00:21:07.857 "superblock": true, 00:21:07.857 "num_base_bdevs": 2, 00:21:07.857 "num_base_bdevs_discovered": 2, 00:21:07.857 "num_base_bdevs_operational": 2, 00:21:07.857 "process": { 00:21:07.857 "type": "rebuild", 00:21:07.857 "target": "spare", 00:21:07.857 "progress": { 00:21:07.857 "blocks": 2560, 00:21:07.857 "percent": 32 00:21:07.857 } 00:21:07.857 }, 00:21:07.857 "base_bdevs_list": [ 00:21:07.857 { 00:21:07.857 "name": "spare", 00:21:07.857 "uuid": "4fe73757-1326-50b3-9861-3723347c480b", 00:21:07.857 "is_configured": true, 00:21:07.857 "data_offset": 256, 00:21:07.857 "data_size": 7936 00:21:07.857 }, 00:21:07.857 { 00:21:07.857 "name": "BaseBdev2", 00:21:07.857 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:07.857 "is_configured": true, 00:21:07.857 "data_offset": 256, 00:21:07.857 "data_size": 7936 00:21:07.857 } 00:21:07.857 ] 00:21:07.857 }' 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.857 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:07.857 [2024-09-27 22:38:03.724536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.115 [2024-09-27 22:38:03.793878] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:08.116 [2024-09-27 22:38:03.793935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.116 [2024-09-27 22:38:03.793954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:08.116 [2024-09-27 22:38:03.793962] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.116 "name": "raid_bdev1", 00:21:08.116 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:08.116 "strip_size_kb": 0, 00:21:08.116 "state": "online", 00:21:08.116 "raid_level": "raid1", 00:21:08.116 "superblock": true, 00:21:08.116 "num_base_bdevs": 2, 00:21:08.116 "num_base_bdevs_discovered": 1, 00:21:08.116 "num_base_bdevs_operational": 1, 00:21:08.116 "base_bdevs_list": [ 00:21:08.116 { 00:21:08.116 "name": null, 00:21:08.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.116 "is_configured": false, 00:21:08.116 "data_offset": 0, 00:21:08.116 "data_size": 7936 00:21:08.116 }, 00:21:08.116 { 00:21:08.116 "name": "BaseBdev2", 00:21:08.116 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:08.116 "is_configured": true, 00:21:08.116 "data_offset": 256, 00:21:08.116 "data_size": 7936 00:21:08.116 } 00:21:08.116 ] 00:21:08.116 }' 
00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.116 22:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.685 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.685 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.685 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.686 "name": "raid_bdev1", 00:21:08.686 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:08.686 "strip_size_kb": 0, 00:21:08.686 "state": "online", 00:21:08.686 "raid_level": "raid1", 00:21:08.686 "superblock": true, 00:21:08.686 "num_base_bdevs": 2, 00:21:08.686 "num_base_bdevs_discovered": 1, 00:21:08.686 "num_base_bdevs_operational": 1, 00:21:08.686 "base_bdevs_list": [ 00:21:08.686 { 00:21:08.686 "name": null, 00:21:08.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.686 "is_configured": false, 00:21:08.686 "data_offset": 0, 
00:21:08.686 "data_size": 7936 00:21:08.686 }, 00:21:08.686 { 00:21:08.686 "name": "BaseBdev2", 00:21:08.686 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:08.686 "is_configured": true, 00:21:08.686 "data_offset": 256, 00:21:08.686 "data_size": 7936 00:21:08.686 } 00:21:08.686 ] 00:21:08.686 }' 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:08.686 [2024-09-27 22:38:04.385091] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.686 [2024-09-27 22:38:04.385149] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.686 [2024-09-27 22:38:04.385181] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:08.686 [2024-09-27 22:38:04.385194] vbdev_passthru.c: 
777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.686 [2024-09-27 22:38:04.385665] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.686 [2024-09-27 22:38:04.385689] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.686 [2024-09-27 22:38:04.385779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:08.686 [2024-09-27 22:38:04.385803] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:08.686 [2024-09-27 22:38:04.385814] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:08.686 [2024-09-27 22:38:04.385826] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:08.686 BaseBdev1 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.686 22:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.639 "name": "raid_bdev1", 00:21:09.639 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:09.639 "strip_size_kb": 0, 00:21:09.639 "state": "online", 00:21:09.639 "raid_level": "raid1", 00:21:09.639 "superblock": true, 00:21:09.639 "num_base_bdevs": 2, 00:21:09.639 "num_base_bdevs_discovered": 1, 00:21:09.639 "num_base_bdevs_operational": 1, 00:21:09.639 "base_bdevs_list": [ 00:21:09.639 { 00:21:09.639 "name": null, 00:21:09.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.639 "is_configured": false, 00:21:09.639 "data_offset": 0, 00:21:09.639 "data_size": 7936 00:21:09.639 }, 00:21:09.639 { 00:21:09.639 "name": "BaseBdev2", 00:21:09.639 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:09.639 "is_configured": true, 00:21:09.639 "data_offset": 256, 00:21:09.639 "data_size": 7936 00:21:09.639 } 00:21:09.639 ] 00:21:09.639 }' 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.639 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.211 "name": "raid_bdev1", 00:21:10.211 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:10.211 "strip_size_kb": 0, 00:21:10.211 "state": "online", 00:21:10.211 "raid_level": "raid1", 00:21:10.211 "superblock": true, 00:21:10.211 "num_base_bdevs": 2, 00:21:10.211 "num_base_bdevs_discovered": 1, 00:21:10.211 "num_base_bdevs_operational": 1, 00:21:10.211 "base_bdevs_list": [ 00:21:10.211 { 00:21:10.211 "name": null, 00:21:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.211 "is_configured": false, 00:21:10.211 "data_offset": 0, 00:21:10.211 "data_size": 7936 00:21:10.211 }, 00:21:10.211 { 00:21:10.211 "name": "BaseBdev2", 00:21:10.211 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:10.211 "is_configured": true, 
00:21:10.211 "data_offset": 256, 00:21:10.211 "data_size": 7936 00:21:10.211 } 00:21:10.211 ] 00:21:10.211 }' 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:10.211 [2024-09-27 22:38:05.936181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:10.211 [2024-09-27 22:38:05.936339] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:10.211 [2024-09-27 22:38:05.936356] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:10.211 request: 00:21:10.211 { 00:21:10.211 "base_bdev": "BaseBdev1", 00:21:10.211 "raid_bdev": "raid_bdev1", 00:21:10.211 "method": "bdev_raid_add_base_bdev", 00:21:10.211 "req_id": 1 00:21:10.211 } 00:21:10.211 Got JSON-RPC error response 00:21:10.211 response: 00:21:10.211 { 00:21:10.211 "code": -22, 00:21:10.211 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:10.211 } 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.211 22:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.149 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.150 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.150 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.150 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.150 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.150 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.150 22:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.150 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.150 "name": "raid_bdev1", 00:21:11.150 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:11.150 "strip_size_kb": 0, 00:21:11.150 "state": "online", 00:21:11.150 "raid_level": "raid1", 00:21:11.150 "superblock": true, 00:21:11.150 "num_base_bdevs": 2, 00:21:11.150 "num_base_bdevs_discovered": 1, 00:21:11.150 "num_base_bdevs_operational": 1, 00:21:11.150 "base_bdevs_list": [ 00:21:11.150 { 00:21:11.150 "name": null, 00:21:11.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.150 "is_configured": false, 00:21:11.150 "data_offset": 0, 00:21:11.150 "data_size": 7936 00:21:11.150 }, 00:21:11.150 { 00:21:11.150 "name": "BaseBdev2", 00:21:11.150 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:11.150 "is_configured": true, 00:21:11.150 "data_offset": 256, 00:21:11.150 "data_size": 7936 00:21:11.150 } 00:21:11.150 ] 00:21:11.150 }' 
00:21:11.150 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.150 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.719 "name": "raid_bdev1", 00:21:11.719 "uuid": "e5454041-e127-47c6-923b-8927a8b0ff99", 00:21:11.719 "strip_size_kb": 0, 00:21:11.719 "state": "online", 00:21:11.719 "raid_level": "raid1", 00:21:11.719 "superblock": true, 00:21:11.719 "num_base_bdevs": 2, 00:21:11.719 "num_base_bdevs_discovered": 1, 00:21:11.719 "num_base_bdevs_operational": 1, 00:21:11.719 "base_bdevs_list": [ 00:21:11.719 { 00:21:11.719 "name": null, 00:21:11.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.719 "is_configured": false, 00:21:11.719 "data_offset": 0, 
00:21:11.719 "data_size": 7936 00:21:11.719 }, 00:21:11.719 { 00:21:11.719 "name": "BaseBdev2", 00:21:11.719 "uuid": "055f22bb-3cf3-501d-8b00-dee5daaec1e4", 00:21:11.719 "is_configured": true, 00:21:11.719 "data_offset": 256, 00:21:11.719 "data_size": 7936 00:21:11.719 } 00:21:11.719 ] 00:21:11.719 }' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87672 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 87672 ']' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 87672 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87672 00:21:11.719 killing process with pid 87672 00:21:11.719 Received shutdown signal, test time was about 60.000000 seconds 00:21:11.719 00:21:11.719 Latency(us) 00:21:11.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.719 =================================================================================================================== 00:21:11.719 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.719 
22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87672' 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 87672 00:21:11.719 [2024-09-27 22:38:07.514404] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.719 22:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 87672 00:21:11.719 [2024-09-27 22:38:07.514539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.719 [2024-09-27 22:38:07.514590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.719 [2024-09-27 22:38:07.514604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:11.978 [2024-09-27 22:38:07.814957] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:13.885 ************************************ 00:21:13.885 END TEST raid_rebuild_test_sb_4k 00:21:13.885 ************************************ 00:21:13.885 22:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:21:13.885 00:21:13.885 real 0m20.640s 00:21:13.885 user 0m26.210s 00:21:13.885 sys 0m2.932s 00:21:13.885 22:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.885 22:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 22:38:09 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:21:14.144 22:38:09 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:14.144 22:38:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:14.144 22:38:09 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.144 22:38:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 ************************************ 00:21:14.144 START TEST raid_state_function_test_sb_md_separate 00:21:14.144 ************************************ 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:14.144 Process raid pid: 88370 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88370 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88370' 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88370 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88370 ']' 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.144 22:38:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 [2024-09-27 22:38:09.907937] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:21:14.144 [2024-09-27 22:38:09.908090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.404 [2024-09-27 22:38:10.079513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.663 [2024-09-27 22:38:10.299866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.663 [2024-09-27 22:38:10.531031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:14.663 [2024-09-27 22:38:10.531063] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.232 [2024-09-27 22:38:11.008634] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.232 [2024-09-27 22:38:11.008690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.232 [2024-09-27 22:38:11.008701] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.232 [2024-09-27 22:38:11.008714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.232 22:38:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.232 "name": "Existed_Raid", 00:21:15.232 "uuid": "ca7e728a-99aa-4272-bdd4-90c6d002fbbf", 00:21:15.232 "strip_size_kb": 0, 00:21:15.232 "state": "configuring", 00:21:15.232 "raid_level": "raid1", 00:21:15.232 "superblock": true, 00:21:15.232 "num_base_bdevs": 2, 00:21:15.232 "num_base_bdevs_discovered": 0, 00:21:15.232 "num_base_bdevs_operational": 2, 00:21:15.232 "base_bdevs_list": [ 00:21:15.232 { 00:21:15.232 "name": "BaseBdev1", 00:21:15.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.232 "is_configured": false, 00:21:15.232 "data_offset": 0, 00:21:15.232 "data_size": 0 00:21:15.232 }, 00:21:15.232 { 00:21:15.232 "name": "BaseBdev2", 00:21:15.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.232 "is_configured": false, 00:21:15.232 "data_offset": 0, 00:21:15.232 "data_size": 0 00:21:15.232 } 00:21:15.232 ] 00:21:15.232 }' 00:21:15.232 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.232 22:38:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 [2024-09-27 22:38:11.424140] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:15.801 [2024-09-27 22:38:11.424298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 [2024-09-27 22:38:11.432148] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.801 [2024-09-27 22:38:11.432301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.801 [2024-09-27 22:38:11.432320] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:15.801 [2024-09-27 22:38:11.432337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.801 22:38:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 [2024-09-27 22:38:11.481610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.801 BaseBdev1 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:15.801 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.802 [ 00:21:15.802 { 00:21:15.802 "name": "BaseBdev1", 00:21:15.802 "aliases": [ 00:21:15.802 "9d261bed-1735-4c1f-a09f-89caf3579091" 00:21:15.802 ], 00:21:15.802 "product_name": "Malloc disk", 00:21:15.802 "block_size": 4096, 00:21:15.802 "num_blocks": 8192, 00:21:15.802 "uuid": "9d261bed-1735-4c1f-a09f-89caf3579091", 00:21:15.802 "md_size": 32, 00:21:15.802 "md_interleave": false, 00:21:15.802 "dif_type": 0, 00:21:15.802 "assigned_rate_limits": { 00:21:15.802 "rw_ios_per_sec": 0, 00:21:15.802 "rw_mbytes_per_sec": 0, 00:21:15.802 "r_mbytes_per_sec": 0, 00:21:15.802 "w_mbytes_per_sec": 0 00:21:15.802 }, 00:21:15.802 "claimed": true, 00:21:15.802 "claim_type": "exclusive_write", 00:21:15.802 "zoned": false, 00:21:15.802 "supported_io_types": { 00:21:15.802 "read": true, 00:21:15.802 "write": true, 00:21:15.802 "unmap": true, 00:21:15.802 "flush": true, 00:21:15.802 "reset": true, 00:21:15.802 "nvme_admin": false, 00:21:15.802 "nvme_io": false, 00:21:15.802 "nvme_io_md": false, 00:21:15.802 "write_zeroes": true, 00:21:15.802 "zcopy": true, 00:21:15.802 "get_zone_info": false, 00:21:15.802 "zone_management": false, 00:21:15.802 "zone_append": false, 00:21:15.802 "compare": false, 00:21:15.802 "compare_and_write": false, 00:21:15.802 "abort": true, 00:21:15.802 "seek_hole": false, 00:21:15.802 "seek_data": false, 00:21:15.802 "copy": true, 00:21:15.802 "nvme_iov_md": false 00:21:15.802 }, 00:21:15.802 "memory_domains": [ 00:21:15.802 { 00:21:15.802 "dma_device_id": "system", 00:21:15.802 "dma_device_type": 1 00:21:15.802 }, 00:21:15.802 { 00:21:15.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.802 "dma_device_type": 2 00:21:15.802 } 
00:21:15.802 ], 00:21:15.802 "driver_specific": {} 00:21:15.802 } 00:21:15.802 ] 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.802 "name": "Existed_Raid", 00:21:15.802 "uuid": "17baab7f-f617-4f6c-9325-ddf961a61de4", 00:21:15.802 "strip_size_kb": 0, 00:21:15.802 "state": "configuring", 00:21:15.802 "raid_level": "raid1", 00:21:15.802 "superblock": true, 00:21:15.802 "num_base_bdevs": 2, 00:21:15.802 "num_base_bdevs_discovered": 1, 00:21:15.802 "num_base_bdevs_operational": 2, 00:21:15.802 "base_bdevs_list": [ 00:21:15.802 { 00:21:15.802 "name": "BaseBdev1", 00:21:15.802 "uuid": "9d261bed-1735-4c1f-a09f-89caf3579091", 00:21:15.802 "is_configured": true, 00:21:15.802 "data_offset": 256, 00:21:15.802 "data_size": 7936 00:21:15.802 }, 00:21:15.802 { 00:21:15.802 "name": "BaseBdev2", 00:21:15.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.802 "is_configured": false, 00:21:15.802 "data_offset": 0, 00:21:15.802 "data_size": 0 00:21:15.802 } 00:21:15.802 ] 00:21:15.802 }' 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.802 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.061 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:16.061 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.061 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.321 [2024-09-27 22:38:11.941044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:21:16.321 [2024-09-27 22:38:11.941089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.321 [2024-09-27 22:38:11.949103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:16.321 [2024-09-27 22:38:11.951170] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:16.321 [2024-09-27 22:38:11.951214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:16.321 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.322 "name": "Existed_Raid", 00:21:16.322 "uuid": "4ebc8de0-85bc-454d-aa14-ca69d728cde1", 00:21:16.322 "strip_size_kb": 0, 00:21:16.322 "state": "configuring", 00:21:16.322 "raid_level": "raid1", 00:21:16.322 "superblock": true, 00:21:16.322 "num_base_bdevs": 2, 00:21:16.322 "num_base_bdevs_discovered": 1, 00:21:16.322 "num_base_bdevs_operational": 2, 00:21:16.322 "base_bdevs_list": [ 00:21:16.322 { 00:21:16.322 "name": 
"BaseBdev1", 00:21:16.322 "uuid": "9d261bed-1735-4c1f-a09f-89caf3579091", 00:21:16.322 "is_configured": true, 00:21:16.322 "data_offset": 256, 00:21:16.322 "data_size": 7936 00:21:16.322 }, 00:21:16.322 { 00:21:16.322 "name": "BaseBdev2", 00:21:16.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.322 "is_configured": false, 00:21:16.322 "data_offset": 0, 00:21:16.322 "data_size": 0 00:21:16.322 } 00:21:16.322 ] 00:21:16.322 }' 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.322 22:38:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.584 [2024-09-27 22:38:12.428024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.584 [2024-09-27 22:38:12.428436] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:16.584 [2024-09-27 22:38:12.428458] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:16.584 [2024-09-27 22:38:12.428545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:16.584 [2024-09-27 22:38:12.428649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:16.584 [2024-09-27 22:38:12.428662] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:16.584 [2024-09-27 22:38:12.428744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.584 BaseBdev2 
00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.584 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.584 [ 00:21:16.584 { 00:21:16.584 "name": "BaseBdev2", 00:21:16.584 "aliases": [ 00:21:16.584 "90b6d178-d737-4f40-b904-a120ec71c74a" 00:21:16.584 ], 00:21:16.584 "product_name": "Malloc disk", 00:21:16.584 
"block_size": 4096, 00:21:16.584 "num_blocks": 8192, 00:21:16.584 "uuid": "90b6d178-d737-4f40-b904-a120ec71c74a", 00:21:16.584 "md_size": 32, 00:21:16.584 "md_interleave": false, 00:21:16.584 "dif_type": 0, 00:21:16.584 "assigned_rate_limits": { 00:21:16.584 "rw_ios_per_sec": 0, 00:21:16.584 "rw_mbytes_per_sec": 0, 00:21:16.584 "r_mbytes_per_sec": 0, 00:21:16.842 "w_mbytes_per_sec": 0 00:21:16.842 }, 00:21:16.842 "claimed": true, 00:21:16.842 "claim_type": "exclusive_write", 00:21:16.842 "zoned": false, 00:21:16.842 "supported_io_types": { 00:21:16.842 "read": true, 00:21:16.842 "write": true, 00:21:16.842 "unmap": true, 00:21:16.842 "flush": true, 00:21:16.842 "reset": true, 00:21:16.842 "nvme_admin": false, 00:21:16.842 "nvme_io": false, 00:21:16.842 "nvme_io_md": false, 00:21:16.842 "write_zeroes": true, 00:21:16.843 "zcopy": true, 00:21:16.843 "get_zone_info": false, 00:21:16.843 "zone_management": false, 00:21:16.843 "zone_append": false, 00:21:16.843 "compare": false, 00:21:16.843 "compare_and_write": false, 00:21:16.843 "abort": true, 00:21:16.843 "seek_hole": false, 00:21:16.843 "seek_data": false, 00:21:16.843 "copy": true, 00:21:16.843 "nvme_iov_md": false 00:21:16.843 }, 00:21:16.843 "memory_domains": [ 00:21:16.843 { 00:21:16.843 "dma_device_id": "system", 00:21:16.843 "dma_device_type": 1 00:21:16.843 }, 00:21:16.843 { 00:21:16.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.843 "dma_device_type": 2 00:21:16.843 } 00:21:16.843 ], 00:21:16.843 "driver_specific": {} 00:21:16.843 } 00:21:16.843 ] 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i 
< num_base_bdevs )) 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.843 22:38:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.843 "name": "Existed_Raid", 00:21:16.843 "uuid": "4ebc8de0-85bc-454d-aa14-ca69d728cde1", 00:21:16.843 "strip_size_kb": 0, 00:21:16.843 "state": "online", 00:21:16.843 "raid_level": "raid1", 00:21:16.843 "superblock": true, 00:21:16.843 "num_base_bdevs": 2, 00:21:16.843 "num_base_bdevs_discovered": 2, 00:21:16.843 "num_base_bdevs_operational": 2, 00:21:16.843 "base_bdevs_list": [ 00:21:16.843 { 00:21:16.843 "name": "BaseBdev1", 00:21:16.843 "uuid": "9d261bed-1735-4c1f-a09f-89caf3579091", 00:21:16.843 "is_configured": true, 00:21:16.843 "data_offset": 256, 00:21:16.843 "data_size": 7936 00:21:16.843 }, 00:21:16.843 { 00:21:16.843 "name": "BaseBdev2", 00:21:16.843 "uuid": "90b6d178-d737-4f40-b904-a120ec71c74a", 00:21:16.843 "is_configured": true, 00:21:16.843 "data_offset": 256, 00:21:16.843 "data_size": 7936 00:21:16.843 } 00:21:16.843 ] 00:21:16.843 }' 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.843 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:17.102 
22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:17.102 [2024-09-27 22:38:12.895624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.102 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.102 "name": "Existed_Raid", 00:21:17.102 "aliases": [ 00:21:17.102 "4ebc8de0-85bc-454d-aa14-ca69d728cde1" 00:21:17.102 ], 00:21:17.102 "product_name": "Raid Volume", 00:21:17.102 "block_size": 4096, 00:21:17.102 "num_blocks": 7936, 00:21:17.102 "uuid": "4ebc8de0-85bc-454d-aa14-ca69d728cde1", 00:21:17.102 "md_size": 32, 00:21:17.102 "md_interleave": false, 00:21:17.102 "dif_type": 0, 00:21:17.102 "assigned_rate_limits": { 00:21:17.102 "rw_ios_per_sec": 0, 00:21:17.102 "rw_mbytes_per_sec": 0, 00:21:17.102 "r_mbytes_per_sec": 0, 00:21:17.102 "w_mbytes_per_sec": 0 00:21:17.102 }, 00:21:17.102 "claimed": false, 00:21:17.102 "zoned": false, 00:21:17.102 "supported_io_types": { 00:21:17.102 "read": true, 00:21:17.102 "write": true, 00:21:17.102 "unmap": false, 00:21:17.102 "flush": false, 00:21:17.102 "reset": true, 00:21:17.102 "nvme_admin": false, 00:21:17.102 "nvme_io": false, 00:21:17.102 "nvme_io_md": false, 00:21:17.102 "write_zeroes": true, 00:21:17.102 "zcopy": false, 00:21:17.102 "get_zone_info": false, 00:21:17.102 "zone_management": false, 00:21:17.102 "zone_append": false, 00:21:17.102 "compare": false, 00:21:17.103 
"compare_and_write": false, 00:21:17.103 "abort": false, 00:21:17.103 "seek_hole": false, 00:21:17.103 "seek_data": false, 00:21:17.103 "copy": false, 00:21:17.103 "nvme_iov_md": false 00:21:17.103 }, 00:21:17.103 "memory_domains": [ 00:21:17.103 { 00:21:17.103 "dma_device_id": "system", 00:21:17.103 "dma_device_type": 1 00:21:17.103 }, 00:21:17.103 { 00:21:17.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.103 "dma_device_type": 2 00:21:17.103 }, 00:21:17.103 { 00:21:17.103 "dma_device_id": "system", 00:21:17.103 "dma_device_type": 1 00:21:17.103 }, 00:21:17.103 { 00:21:17.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.103 "dma_device_type": 2 00:21:17.103 } 00:21:17.103 ], 00:21:17.103 "driver_specific": { 00:21:17.103 "raid": { 00:21:17.103 "uuid": "4ebc8de0-85bc-454d-aa14-ca69d728cde1", 00:21:17.103 "strip_size_kb": 0, 00:21:17.103 "state": "online", 00:21:17.103 "raid_level": "raid1", 00:21:17.103 "superblock": true, 00:21:17.103 "num_base_bdevs": 2, 00:21:17.103 "num_base_bdevs_discovered": 2, 00:21:17.103 "num_base_bdevs_operational": 2, 00:21:17.103 "base_bdevs_list": [ 00:21:17.103 { 00:21:17.103 "name": "BaseBdev1", 00:21:17.103 "uuid": "9d261bed-1735-4c1f-a09f-89caf3579091", 00:21:17.103 "is_configured": true, 00:21:17.103 "data_offset": 256, 00:21:17.103 "data_size": 7936 00:21:17.103 }, 00:21:17.103 { 00:21:17.103 "name": "BaseBdev2", 00:21:17.103 "uuid": "90b6d178-d737-4f40-b904-a120ec71c74a", 00:21:17.103 "is_configured": true, 00:21:17.103 "data_offset": 256, 00:21:17.103 "data_size": 7936 00:21:17.103 } 00:21:17.103 ] 00:21:17.103 } 00:21:17.103 } 00:21:17.103 }' 00:21:17.103 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:17.103 22:38:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:17.103 BaseBdev2' 00:21:17.103 22:38:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.362 [2024-09-27 22:38:13.107172] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.362 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.622 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.622 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.622 "name": "Existed_Raid", 00:21:17.622 "uuid": "4ebc8de0-85bc-454d-aa14-ca69d728cde1", 00:21:17.622 "strip_size_kb": 0, 00:21:17.622 "state": "online", 00:21:17.622 "raid_level": "raid1", 00:21:17.622 "superblock": 
true, 00:21:17.622 "num_base_bdevs": 2, 00:21:17.622 "num_base_bdevs_discovered": 1, 00:21:17.622 "num_base_bdevs_operational": 1, 00:21:17.622 "base_bdevs_list": [ 00:21:17.622 { 00:21:17.622 "name": null, 00:21:17.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.622 "is_configured": false, 00:21:17.622 "data_offset": 0, 00:21:17.622 "data_size": 7936 00:21:17.622 }, 00:21:17.622 { 00:21:17.622 "name": "BaseBdev2", 00:21:17.622 "uuid": "90b6d178-d737-4f40-b904-a120ec71c74a", 00:21:17.622 "is_configured": true, 00:21:17.622 "data_offset": 256, 00:21:17.622 "data_size": 7936 00:21:17.622 } 00:21:17.622 ] 00:21:17.622 }' 00:21:17.622 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.622 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.880 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.881 [2024-09-27 22:38:13.673417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:17.881 [2024-09-27 22:38:13.673513] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.139 [2024-09-27 22:38:13.775011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.139 [2024-09-27 22:38:13.775057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.139 [2024-09-27 22:38:13.775071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.139 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88370 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88370 ']' 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88370 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88370 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:18.140 killing process with pid 88370 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88370' 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88370 00:21:18.140 [2024-09-27 22:38:13.868892] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:18.140 22:38:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88370 00:21:18.140 [2024-09-27 
22:38:13.886409] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:20.043 ************************************ 00:21:20.043 END TEST raid_state_function_test_sb_md_separate 00:21:20.043 ************************************ 00:21:20.043 22:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:20.043 00:21:20.043 real 0m5.933s 00:21:20.043 user 0m7.924s 00:21:20.043 sys 0m1.029s 00:21:20.043 22:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:20.043 22:38:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:20.043 22:38:15 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:20.043 22:38:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:20.043 22:38:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:20.043 22:38:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:20.043 ************************************ 00:21:20.043 START TEST raid_superblock_test_md_separate 00:21:20.043 ************************************ 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:20.043 22:38:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88627 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88627 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88627 ']' 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.043 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.043 22:38:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:20.043 [2024-09-27 22:38:15.912841] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:21:20.044 [2024-09-27 22:38:15.912965] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88627 ] 00:21:20.301 [2024-09-27 22:38:16.072901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.559 [2024-09-27 22:38:16.288691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.817 [2024-09-27 22:38:16.516700] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.817 [2024-09-27 22:38:16.516731] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.383 22:38:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.383 22:38:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:21:21.383 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:21.383 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:21.383 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.384 22:38:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 malloc1 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 [2024-09-27 22:38:17.044515] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:21.384 [2024-09-27 22:38:17.044690] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.384 [2024-09-27 22:38:17.044756] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:21.384 [2024-09-27 22:38:17.044849] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.384 [2024-09-27 22:38:17.047081] 
vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.384 [2024-09-27 22:38:17.047216] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:21.384 pt1 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 malloc2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 [2024-09-27 22:38:17.109457] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.384 [2024-09-27 22:38:17.109511] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.384 [2024-09-27 22:38:17.109537] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:21.384 [2024-09-27 22:38:17.109548] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.384 [2024-09-27 22:38:17.111675] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.384 [2024-09-27 22:38:17.111711] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:21.384 pt2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 [2024-09-27 22:38:17.121492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:21.384 [2024-09-27 22:38:17.123642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:21.384 
[2024-09-27 22:38:17.123829] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:21.384 [2024-09-27 22:38:17.123843] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:21.384 [2024-09-27 22:38:17.123927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:21.384 [2024-09-27 22:38:17.124086] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:21.384 [2024-09-27 22:38:17.124099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:21.384 [2024-09-27 22:38:17.124209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.384 "name": "raid_bdev1", 00:21:21.384 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:21.384 "strip_size_kb": 0, 00:21:21.384 "state": "online", 00:21:21.384 "raid_level": "raid1", 00:21:21.384 "superblock": true, 00:21:21.384 "num_base_bdevs": 2, 00:21:21.384 "num_base_bdevs_discovered": 2, 00:21:21.384 "num_base_bdevs_operational": 2, 00:21:21.384 "base_bdevs_list": [ 00:21:21.384 { 00:21:21.384 "name": "pt1", 00:21:21.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:21.384 "is_configured": true, 00:21:21.384 "data_offset": 256, 00:21:21.384 "data_size": 7936 00:21:21.384 }, 00:21:21.384 { 00:21:21.384 "name": "pt2", 00:21:21.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.384 "is_configured": true, 00:21:21.384 "data_offset": 256, 00:21:21.384 "data_size": 7936 00:21:21.384 } 00:21:21.384 ] 00:21:21.384 }' 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.384 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.955 [2024-09-27 22:38:17.557146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.955 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:21.955 "name": "raid_bdev1", 00:21:21.955 "aliases": [ 00:21:21.955 "9a1818f9-e70b-4795-8b50-7e4c21ab2379" 00:21:21.955 ], 00:21:21.955 "product_name": "Raid Volume", 00:21:21.955 "block_size": 4096, 00:21:21.955 "num_blocks": 7936, 00:21:21.955 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:21.955 "md_size": 32, 00:21:21.955 "md_interleave": false, 00:21:21.955 "dif_type": 0, 00:21:21.955 "assigned_rate_limits": { 00:21:21.955 "rw_ios_per_sec": 0, 00:21:21.955 "rw_mbytes_per_sec": 0, 00:21:21.955 "r_mbytes_per_sec": 0, 00:21:21.955 
"w_mbytes_per_sec": 0 00:21:21.955 }, 00:21:21.955 "claimed": false, 00:21:21.955 "zoned": false, 00:21:21.955 "supported_io_types": { 00:21:21.955 "read": true, 00:21:21.955 "write": true, 00:21:21.955 "unmap": false, 00:21:21.955 "flush": false, 00:21:21.955 "reset": true, 00:21:21.955 "nvme_admin": false, 00:21:21.955 "nvme_io": false, 00:21:21.955 "nvme_io_md": false, 00:21:21.955 "write_zeroes": true, 00:21:21.955 "zcopy": false, 00:21:21.955 "get_zone_info": false, 00:21:21.955 "zone_management": false, 00:21:21.955 "zone_append": false, 00:21:21.955 "compare": false, 00:21:21.955 "compare_and_write": false, 00:21:21.955 "abort": false, 00:21:21.955 "seek_hole": false, 00:21:21.955 "seek_data": false, 00:21:21.955 "copy": false, 00:21:21.955 "nvme_iov_md": false 00:21:21.955 }, 00:21:21.955 "memory_domains": [ 00:21:21.955 { 00:21:21.955 "dma_device_id": "system", 00:21:21.955 "dma_device_type": 1 00:21:21.955 }, 00:21:21.955 { 00:21:21.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.955 "dma_device_type": 2 00:21:21.955 }, 00:21:21.955 { 00:21:21.955 "dma_device_id": "system", 00:21:21.955 "dma_device_type": 1 00:21:21.955 }, 00:21:21.956 { 00:21:21.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.956 "dma_device_type": 2 00:21:21.956 } 00:21:21.956 ], 00:21:21.956 "driver_specific": { 00:21:21.956 "raid": { 00:21:21.956 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:21.956 "strip_size_kb": 0, 00:21:21.956 "state": "online", 00:21:21.956 "raid_level": "raid1", 00:21:21.956 "superblock": true, 00:21:21.956 "num_base_bdevs": 2, 00:21:21.956 "num_base_bdevs_discovered": 2, 00:21:21.956 "num_base_bdevs_operational": 2, 00:21:21.956 "base_bdevs_list": [ 00:21:21.956 { 00:21:21.956 "name": "pt1", 00:21:21.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:21.956 "is_configured": true, 00:21:21.956 "data_offset": 256, 00:21:21.956 "data_size": 7936 00:21:21.956 }, 00:21:21.956 { 00:21:21.956 "name": "pt2", 00:21:21.956 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:21:21.956 "is_configured": true, 00:21:21.956 "data_offset": 256, 00:21:21.956 "data_size": 7936 00:21:21.956 } 00:21:21.956 ] 00:21:21.956 } 00:21:21.956 } 00:21:21.956 }' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:21.956 pt2' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.956 [2024-09-27 22:38:17.768779] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a1818f9-e70b-4795-8b50-7e4c21ab2379 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 9a1818f9-e70b-4795-8b50-7e4c21ab2379 ']' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.956 [2024-09-27 22:38:17.808451] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.956 [2024-09-27 22:38:17.808476] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.956 [2024-09-27 22:38:17.808566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.956 [2024-09-27 22:38:17.808628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.956 [2024-09-27 22:38:17.808642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.956 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 
00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:22.219 22:38:17 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.219 [2024-09-27 22:38:17.932303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:22.219 [2024-09-27 22:38:17.934417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:22.219 [2024-09-27 22:38:17.934492] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:22.219 [2024-09-27 22:38:17.934548] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:22.219 [2024-09-27 22:38:17.934566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.219 [2024-09-27 22:38:17.934579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:22.219 request: 00:21:22.219 { 00:21:22.219 "name": "raid_bdev1", 00:21:22.219 "raid_level": "raid1", 00:21:22.219 "base_bdevs": [ 00:21:22.219 "malloc1", 00:21:22.219 "malloc2" 00:21:22.219 ], 00:21:22.219 "superblock": false, 00:21:22.219 "method": "bdev_raid_create", 00:21:22.219 "req_id": 1 00:21:22.219 } 00:21:22.219 Got JSON-RPC error response 00:21:22.219 response: 00:21:22.219 { 00:21:22.219 "code": -17, 00:21:22.219 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:22.219 } 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:21:22.219 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:22.220 22:38:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.220 [2024-09-27 22:38:17.996182] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:22.220 [2024-09-27 22:38:17.996239] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.220 [2024-09-27 22:38:17.996258] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:22.220 [2024-09-27 22:38:17.996272] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.220 [2024-09-27 22:38:17.998506] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.220 [2024-09-27 22:38:17.998549] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:22.220 [2024-09-27 22:38:17.998600] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:22.220 [2024-09-27 22:38:17.998653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.220 pt1 00:21:22.220 22:38:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.220 "name": "raid_bdev1", 00:21:22.220 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:22.220 "strip_size_kb": 0, 00:21:22.220 "state": "configuring", 00:21:22.220 "raid_level": "raid1", 00:21:22.220 "superblock": true, 00:21:22.220 "num_base_bdevs": 2, 00:21:22.220 "num_base_bdevs_discovered": 1, 00:21:22.220 "num_base_bdevs_operational": 2, 00:21:22.220 "base_bdevs_list": [ 00:21:22.220 { 00:21:22.220 "name": "pt1", 00:21:22.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.220 "is_configured": true, 00:21:22.220 
"data_offset": 256, 00:21:22.220 "data_size": 7936 00:21:22.220 }, 00:21:22.220 { 00:21:22.220 "name": null, 00:21:22.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.220 "is_configured": false, 00:21:22.220 "data_offset": 256, 00:21:22.220 "data_size": 7936 00:21:22.220 } 00:21:22.220 ] 00:21:22.220 }' 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.220 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.786 [2024-09-27 22:38:18.404116] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:22.786 [2024-09-27 22:38:18.404191] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.786 [2024-09-27 22:38:18.404214] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:22.786 [2024-09-27 22:38:18.404228] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.786 [2024-09-27 22:38:18.404467] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.786 [2024-09-27 22:38:18.404487] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:21:22.786 [2024-09-27 22:38:18.404537] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:22.786 [2024-09-27 22:38:18.404562] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:22.786 [2024-09-27 22:38:18.404688] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:22.786 [2024-09-27 22:38:18.404701] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:22.786 [2024-09-27 22:38:18.404770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:22.786 [2024-09-27 22:38:18.404888] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:22.786 [2024-09-27 22:38:18.404897] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:22.786 [2024-09-27 22:38:18.405011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.786 pt2 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.786 "name": "raid_bdev1", 00:21:22.786 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:22.786 "strip_size_kb": 0, 00:21:22.786 "state": "online", 00:21:22.786 "raid_level": "raid1", 00:21:22.786 "superblock": true, 00:21:22.786 "num_base_bdevs": 2, 00:21:22.786 "num_base_bdevs_discovered": 2, 00:21:22.786 "num_base_bdevs_operational": 2, 00:21:22.786 "base_bdevs_list": [ 00:21:22.786 { 00:21:22.786 "name": "pt1", 00:21:22.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.786 "is_configured": true, 00:21:22.786 "data_offset": 256, 00:21:22.786 "data_size": 7936 00:21:22.786 }, 00:21:22.786 { 00:21:22.786 "name": "pt2", 00:21:22.786 
"uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.786 "is_configured": true, 00:21:22.786 "data_offset": 256, 00:21:22.786 "data_size": 7936 00:21:22.786 } 00:21:22.786 ] 00:21:22.786 }' 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.786 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.045 [2024-09-27 22:38:18.855723] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.045 "name": "raid_bdev1", 
00:21:23.045 "aliases": [ 00:21:23.045 "9a1818f9-e70b-4795-8b50-7e4c21ab2379" 00:21:23.045 ], 00:21:23.045 "product_name": "Raid Volume", 00:21:23.045 "block_size": 4096, 00:21:23.045 "num_blocks": 7936, 00:21:23.045 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:23.045 "md_size": 32, 00:21:23.045 "md_interleave": false, 00:21:23.045 "dif_type": 0, 00:21:23.045 "assigned_rate_limits": { 00:21:23.045 "rw_ios_per_sec": 0, 00:21:23.045 "rw_mbytes_per_sec": 0, 00:21:23.045 "r_mbytes_per_sec": 0, 00:21:23.045 "w_mbytes_per_sec": 0 00:21:23.045 }, 00:21:23.045 "claimed": false, 00:21:23.045 "zoned": false, 00:21:23.045 "supported_io_types": { 00:21:23.045 "read": true, 00:21:23.045 "write": true, 00:21:23.045 "unmap": false, 00:21:23.045 "flush": false, 00:21:23.045 "reset": true, 00:21:23.045 "nvme_admin": false, 00:21:23.045 "nvme_io": false, 00:21:23.045 "nvme_io_md": false, 00:21:23.045 "write_zeroes": true, 00:21:23.045 "zcopy": false, 00:21:23.045 "get_zone_info": false, 00:21:23.045 "zone_management": false, 00:21:23.045 "zone_append": false, 00:21:23.045 "compare": false, 00:21:23.045 "compare_and_write": false, 00:21:23.045 "abort": false, 00:21:23.045 "seek_hole": false, 00:21:23.045 "seek_data": false, 00:21:23.045 "copy": false, 00:21:23.045 "nvme_iov_md": false 00:21:23.045 }, 00:21:23.045 "memory_domains": [ 00:21:23.045 { 00:21:23.045 "dma_device_id": "system", 00:21:23.045 "dma_device_type": 1 00:21:23.045 }, 00:21:23.045 { 00:21:23.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.045 "dma_device_type": 2 00:21:23.045 }, 00:21:23.045 { 00:21:23.045 "dma_device_id": "system", 00:21:23.045 "dma_device_type": 1 00:21:23.045 }, 00:21:23.045 { 00:21:23.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.045 "dma_device_type": 2 00:21:23.045 } 00:21:23.045 ], 00:21:23.045 "driver_specific": { 00:21:23.045 "raid": { 00:21:23.045 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:23.045 "strip_size_kb": 0, 00:21:23.045 "state": "online", 
00:21:23.045 "raid_level": "raid1", 00:21:23.045 "superblock": true, 00:21:23.045 "num_base_bdevs": 2, 00:21:23.045 "num_base_bdevs_discovered": 2, 00:21:23.045 "num_base_bdevs_operational": 2, 00:21:23.045 "base_bdevs_list": [ 00:21:23.045 { 00:21:23.045 "name": "pt1", 00:21:23.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.045 "is_configured": true, 00:21:23.045 "data_offset": 256, 00:21:23.045 "data_size": 7936 00:21:23.045 }, 00:21:23.045 { 00:21:23.045 "name": "pt2", 00:21:23.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.045 "is_configured": true, 00:21:23.045 "data_offset": 256, 00:21:23.045 "data_size": 7936 00:21:23.045 } 00:21:23.045 ] 00:21:23.045 } 00:21:23.045 } 00:21:23.045 }' 00:21:23.045 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:23.305 pt2' 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.305 22:38:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.305 
22:38:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:23.305 [2024-09-27 22:38:19.075390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 9a1818f9-e70b-4795-8b50-7e4c21ab2379 '!=' 9a1818f9-e70b-4795-8b50-7e4c21ab2379 ']' 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.305 [2024-09-27 22:38:19.119137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.305 
22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.305 "name": "raid_bdev1", 00:21:23.305 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:23.305 "strip_size_kb": 0, 00:21:23.305 "state": "online", 00:21:23.305 "raid_level": "raid1", 00:21:23.305 "superblock": true, 00:21:23.305 "num_base_bdevs": 2, 00:21:23.305 "num_base_bdevs_discovered": 1, 00:21:23.305 "num_base_bdevs_operational": 1, 00:21:23.305 "base_bdevs_list": [ 00:21:23.305 { 00:21:23.305 "name": null, 00:21:23.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.305 "is_configured": false, 00:21:23.305 "data_offset": 0, 00:21:23.305 "data_size": 7936 00:21:23.305 }, 00:21:23.305 { 00:21:23.305 "name": "pt2", 00:21:23.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.305 "is_configured": true, 00:21:23.305 "data_offset": 256, 00:21:23.305 "data_size": 7936 00:21:23.305 } 
00:21:23.305 ] 00:21:23.305 }' 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.305 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.875 [2024-09-27 22:38:19.535049] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.875 [2024-09-27 22:38:19.535078] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.875 [2024-09-27 22:38:19.535155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.875 [2024-09-27 22:38:19.535203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.875 [2024-09-27 22:38:19.535217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.875 22:38:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.875 [2024-09-27 22:38:19.602928] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:23.875 [2024-09-27 
22:38:19.603125] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.875 [2024-09-27 22:38:19.603178] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:23.875 [2024-09-27 22:38:19.603283] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.875 [2024-09-27 22:38:19.605592] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.875 [2024-09-27 22:38:19.605732] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:23.875 [2024-09-27 22:38:19.605853] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:23.875 [2024-09-27 22:38:19.605937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:23.875 [2024-09-27 22:38:19.606078] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:23.875 [2024-09-27 22:38:19.606187] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:23.875 [2024-09-27 22:38:19.606304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:23.875 [2024-09-27 22:38:19.606457] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:23.875 [2024-09-27 22:38:19.606602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:23.875 [2024-09-27 22:38:19.606760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.875 pt2 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.875 "name": "raid_bdev1", 00:21:23.875 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:23.875 "strip_size_kb": 0, 00:21:23.875 "state": "online", 00:21:23.875 "raid_level": "raid1", 00:21:23.875 "superblock": true, 00:21:23.875 "num_base_bdevs": 2, 00:21:23.875 
"num_base_bdevs_discovered": 1, 00:21:23.875 "num_base_bdevs_operational": 1, 00:21:23.875 "base_bdevs_list": [ 00:21:23.875 { 00:21:23.875 "name": null, 00:21:23.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.875 "is_configured": false, 00:21:23.875 "data_offset": 256, 00:21:23.875 "data_size": 7936 00:21:23.875 }, 00:21:23.875 { 00:21:23.875 "name": "pt2", 00:21:23.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.875 "is_configured": true, 00:21:23.875 "data_offset": 256, 00:21:23.875 "data_size": 7936 00:21:23.875 } 00:21:23.875 ] 00:21:23.875 }' 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.875 22:38:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 [2024-09-27 22:38:20.042405] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.445 [2024-09-27 22:38:20.042554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.445 [2024-09-27 22:38:20.042639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.445 [2024-09-27 22:38:20.042690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.445 [2024-09-27 22:38:20.042702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.445 22:38:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 [2024-09-27 22:38:20.098350] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:24.445 [2024-09-27 22:38:20.098410] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.445 [2024-09-27 22:38:20.098433] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:24.445 [2024-09-27 22:38:20.098444] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.445 [2024-09-27 22:38:20.100757] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.445 [2024-09-27 22:38:20.100800] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:21:24.445 [2024-09-27 22:38:20.100861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:24.445 [2024-09-27 22:38:20.100906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:24.445 [2024-09-27 22:38:20.101054] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:24.445 [2024-09-27 22:38:20.101067] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.445 [2024-09-27 22:38:20.101091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:24.445 [2024-09-27 22:38:20.101177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.445 [2024-09-27 22:38:20.101245] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:24.445 [2024-09-27 22:38:20.101254] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:24.445 [2024-09-27 22:38:20.101323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:24.445 [2024-09-27 22:38:20.101434] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:24.445 [2024-09-27 22:38:20.101447] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:24.445 [2024-09-27 22:38:20.101549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.445 pt1 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.445 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.445 "name": "raid_bdev1", 00:21:24.445 "uuid": "9a1818f9-e70b-4795-8b50-7e4c21ab2379", 00:21:24.445 "strip_size_kb": 0, 00:21:24.445 "state": "online", 00:21:24.445 "raid_level": "raid1", 
00:21:24.445 "superblock": true, 00:21:24.445 "num_base_bdevs": 2, 00:21:24.445 "num_base_bdevs_discovered": 1, 00:21:24.445 "num_base_bdevs_operational": 1, 00:21:24.445 "base_bdevs_list": [ 00:21:24.445 { 00:21:24.445 "name": null, 00:21:24.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.446 "is_configured": false, 00:21:24.446 "data_offset": 256, 00:21:24.446 "data_size": 7936 00:21:24.446 }, 00:21:24.446 { 00:21:24.446 "name": "pt2", 00:21:24.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.446 "is_configured": true, 00:21:24.446 "data_offset": 256, 00:21:24.446 "data_size": 7936 00:21:24.446 } 00:21:24.446 ] 00:21:24.446 }' 00:21:24.446 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.446 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.704 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:24.704 
22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.704 [2024-09-27 22:38:20.573903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 9a1818f9-e70b-4795-8b50-7e4c21ab2379 '!=' 9a1818f9-e70b-4795-8b50-7e4c21ab2379 ']' 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88627 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88627 ']' 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 88627 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88627 00:21:24.963 killing process with pid 88627 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88627' 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 88627 00:21:24.963 [2024-09-27 22:38:20.638148] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:24.963 [2024-09-27 22:38:20.638224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:21:24.963 [2024-09-27 22:38:20.638266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.963 [2024-09-27 22:38:20.638282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:24.963 22:38:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 88627 00:21:25.221 [2024-09-27 22:38:20.855309] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.125 22:38:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:27.125 00:21:27.125 real 0m6.903s 00:21:27.125 user 0m9.822s 00:21:27.125 sys 0m1.258s 00:21:27.125 ************************************ 00:21:27.125 END TEST raid_superblock_test_md_separate 00:21:27.125 ************************************ 00:21:27.125 22:38:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:27.125 22:38:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.125 22:38:22 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:27.125 22:38:22 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:27.125 22:38:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:27.125 22:38:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:27.125 22:38:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.125 ************************************ 00:21:27.125 START TEST raid_rebuild_test_sb_md_separate 00:21:27.125 ************************************ 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88957 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.125 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88957 00:21:27.126 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88957 ']' 00:21:27.126 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.126 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.126 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:27.126 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.126 22:38:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.126 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.126 Zero copy mechanism will not be used. 00:21:27.126 [2024-09-27 22:38:22.910571] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:21:27.126 [2024-09-27 22:38:22.910711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88957 ] 00:21:27.384 [2024-09-27 22:38:23.081692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.642 [2024-09-27 22:38:23.291846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.902 [2024-09-27 22:38:23.527891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.902 [2024-09-27 22:38:23.527928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.161 22:38:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.161 22:38:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:21:28.161 22:38:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.161 22:38:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:28.161 22:38:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.161 22:38:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.161 BaseBdev1_malloc 
00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.161 [2024-09-27 22:38:24.026835] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.161 [2024-09-27 22:38:24.026903] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.161 [2024-09-27 22:38:24.026948] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:28.161 [2024-09-27 22:38:24.026962] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.161 [2024-09-27 22:38:24.029110] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.161 [2024-09-27 22:38:24.029152] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.161 BaseBdev1 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.161 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.420 BaseBdev2_malloc 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.420 [2024-09-27 22:38:24.089384] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:28.420 [2024-09-27 22:38:24.089444] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.420 [2024-09-27 22:38:24.089465] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:28.420 [2024-09-27 22:38:24.089479] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.420 [2024-09-27 22:38:24.091601] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.420 [2024-09-27 22:38:24.091641] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.420 BaseBdev2 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.420 spare_malloc 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.420 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.421 spare_delay 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.421 [2024-09-27 22:38:24.163178] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.421 [2024-09-27 22:38:24.163238] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.421 [2024-09-27 22:38:24.163260] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:28.421 [2024-09-27 22:38:24.163274] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.421 [2024-09-27 22:38:24.165429] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.421 [2024-09-27 22:38:24.165473] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.421 spare 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.421 [2024-09-27 22:38:24.175224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.421 [2024-09-27 22:38:24.177294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.421 [2024-09-27 22:38:24.177480] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:28.421 [2024-09-27 22:38:24.177497] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:28.421 [2024-09-27 22:38:24.177566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:28.421 [2024-09-27 22:38:24.177689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:28.421 [2024-09-27 22:38:24.177698] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:28.421 [2024-09-27 22:38:24.177796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:28.421 22:38:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.421 "name": "raid_bdev1", 00:21:28.421 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:28.421 "strip_size_kb": 0, 00:21:28.421 "state": "online", 00:21:28.421 "raid_level": "raid1", 00:21:28.421 "superblock": true, 00:21:28.421 "num_base_bdevs": 2, 00:21:28.421 "num_base_bdevs_discovered": 2, 00:21:28.421 "num_base_bdevs_operational": 2, 00:21:28.421 "base_bdevs_list": [ 00:21:28.421 { 00:21:28.421 "name": "BaseBdev1", 00:21:28.421 "uuid": "41016ec6-8554-58f4-b095-5fc6b280feb6", 00:21:28.421 "is_configured": true, 00:21:28.421 "data_offset": 256, 00:21:28.421 "data_size": 7936 00:21:28.421 }, 00:21:28.421 { 00:21:28.421 "name": "BaseBdev2", 00:21:28.421 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:28.421 "is_configured": true, 00:21:28.421 "data_offset": 256, 00:21:28.421 "data_size": 7936 
00:21:28.421 } 00:21:28.421 ] 00:21:28.421 }' 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.421 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.990 [2024-09-27 22:38:24.570913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:28.990 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:28.991 [2024-09-27 22:38:24.826365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:28.991 /dev/nbd0 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:28.991 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.250 1+0 records in 00:21:29.250 1+0 records out 00:21:29.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341174 s, 12.0 MB/s 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.250 22:38:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:29.250 22:38:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:29.819 7936+0 records in 00:21:29.819 7936+0 records out 00:21:29.819 32505856 bytes (33 MB, 31 MiB) copied, 0.695872 s, 46.7 MB/s 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.819 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:30.077 [2024-09-27 22:38:25.840576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.077 22:38:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.077 [2024-09-27 22:38:25.856635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.077 "name": "raid_bdev1", 00:21:30.077 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:30.077 "strip_size_kb": 0, 00:21:30.077 "state": "online", 00:21:30.077 "raid_level": "raid1", 00:21:30.077 "superblock": true, 00:21:30.077 "num_base_bdevs": 2, 00:21:30.077 "num_base_bdevs_discovered": 1, 00:21:30.077 "num_base_bdevs_operational": 1, 00:21:30.077 "base_bdevs_list": [ 00:21:30.077 { 00:21:30.077 "name": null, 00:21:30.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.077 "is_configured": false, 00:21:30.077 "data_offset": 0, 00:21:30.077 "data_size": 7936 00:21:30.077 }, 00:21:30.077 { 00:21:30.077 "name": "BaseBdev2", 00:21:30.077 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:30.077 "is_configured": true, 00:21:30.077 "data_offset": 256, 00:21:30.077 "data_size": 7936 00:21:30.077 } 00:21:30.077 ] 00:21:30.077 }' 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.077 22:38:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.645 22:38:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.645 22:38:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.645 22:38:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.645 [2024-09-27 22:38:26.244135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.645 [2024-09-27 22:38:26.262053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:30.645 22:38:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.645 22:38:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:30.645 [2024-09-27 22:38:26.264125] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.583 "name": "raid_bdev1", 00:21:31.583 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:31.583 "strip_size_kb": 0, 00:21:31.583 "state": "online", 00:21:31.583 "raid_level": "raid1", 00:21:31.583 "superblock": true, 00:21:31.583 "num_base_bdevs": 2, 00:21:31.583 "num_base_bdevs_discovered": 2, 00:21:31.583 "num_base_bdevs_operational": 2, 00:21:31.583 "process": { 00:21:31.583 "type": "rebuild", 00:21:31.583 "target": "spare", 00:21:31.583 "progress": { 00:21:31.583 "blocks": 2560, 00:21:31.583 "percent": 32 00:21:31.583 } 00:21:31.583 }, 00:21:31.583 "base_bdevs_list": [ 00:21:31.583 { 00:21:31.583 "name": "spare", 00:21:31.583 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:31.583 "is_configured": true, 00:21:31.583 "data_offset": 256, 00:21:31.583 "data_size": 7936 00:21:31.583 }, 00:21:31.583 { 00:21:31.583 "name": "BaseBdev2", 00:21:31.583 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:31.583 "is_configured": true, 00:21:31.583 "data_offset": 256, 00:21:31.583 "data_size": 7936 00:21:31.583 } 00:21:31.583 ] 00:21:31.583 }' 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.583 22:38:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.583 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.583 [2024-09-27 22:38:27.392272] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.843 [2024-09-27 22:38:27.468634] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:31.843 [2024-09-27 22:38:27.468697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.843 [2024-09-27 22:38:27.468713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.843 [2024-09-27 22:38:27.468730] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.843 22:38:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.843 "name": "raid_bdev1", 00:21:31.843 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:31.843 "strip_size_kb": 0, 00:21:31.843 "state": "online", 00:21:31.843 "raid_level": "raid1", 00:21:31.843 "superblock": true, 00:21:31.843 "num_base_bdevs": 2, 00:21:31.843 "num_base_bdevs_discovered": 1, 00:21:31.843 "num_base_bdevs_operational": 1, 00:21:31.843 "base_bdevs_list": [ 00:21:31.843 { 00:21:31.843 "name": null, 00:21:31.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.843 "is_configured": false, 00:21:31.843 "data_offset": 0, 00:21:31.843 "data_size": 7936 00:21:31.843 }, 00:21:31.843 { 00:21:31.843 "name": "BaseBdev2", 00:21:31.843 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:31.843 "is_configured": true, 00:21:31.843 "data_offset": 256, 00:21:31.843 "data_size": 7936 00:21:31.843 } 00:21:31.843 ] 00:21:31.843 }' 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.843 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.102 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.102 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.102 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.102 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:32.102 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.103 "name": "raid_bdev1", 00:21:32.103 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:32.103 "strip_size_kb": 0, 00:21:32.103 "state": "online", 00:21:32.103 "raid_level": "raid1", 00:21:32.103 "superblock": true, 00:21:32.103 "num_base_bdevs": 2, 00:21:32.103 "num_base_bdevs_discovered": 1, 00:21:32.103 "num_base_bdevs_operational": 1, 00:21:32.103 "base_bdevs_list": [ 00:21:32.103 { 00:21:32.103 "name": null, 00:21:32.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.103 
"is_configured": false, 00:21:32.103 "data_offset": 0, 00:21:32.103 "data_size": 7936 00:21:32.103 }, 00:21:32.103 { 00:21:32.103 "name": "BaseBdev2", 00:21:32.103 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:32.103 "is_configured": true, 00:21:32.103 "data_offset": 256, 00:21:32.103 "data_size": 7936 00:21:32.103 } 00:21:32.103 ] 00:21:32.103 }' 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.103 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.362 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.362 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:32.362 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.362 22:38:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.362 [2024-09-27 22:38:28.002600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.362 [2024-09-27 22:38:28.018278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:32.362 22:38:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.362 22:38:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:32.362 [2024-09-27 22:38:28.020383] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.300 22:38:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.300 "name": "raid_bdev1", 00:21:33.300 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:33.300 "strip_size_kb": 0, 00:21:33.300 "state": "online", 00:21:33.300 "raid_level": "raid1", 00:21:33.300 "superblock": true, 00:21:33.300 "num_base_bdevs": 2, 00:21:33.300 "num_base_bdevs_discovered": 2, 00:21:33.300 "num_base_bdevs_operational": 2, 00:21:33.300 "process": { 00:21:33.300 "type": "rebuild", 00:21:33.300 "target": "spare", 00:21:33.300 "progress": { 00:21:33.300 "blocks": 2560, 00:21:33.300 "percent": 32 00:21:33.300 } 00:21:33.300 }, 00:21:33.300 "base_bdevs_list": [ 00:21:33.300 { 00:21:33.300 "name": "spare", 00:21:33.300 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:33.300 "is_configured": true, 00:21:33.300 "data_offset": 256, 00:21:33.300 "data_size": 7936 00:21:33.300 }, 
00:21:33.300 { 00:21:33.300 "name": "BaseBdev2", 00:21:33.300 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:33.300 "is_configured": true, 00:21:33.300 "data_offset": 256, 00:21:33.300 "data_size": 7936 00:21:33.300 } 00:21:33.300 ] 00:21:33.300 }' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:33.300 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=813 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.300 22:38:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.300 "name": "raid_bdev1", 00:21:33.300 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:33.300 "strip_size_kb": 0, 00:21:33.300 "state": "online", 00:21:33.300 "raid_level": "raid1", 00:21:33.300 "superblock": true, 00:21:33.300 "num_base_bdevs": 2, 00:21:33.300 "num_base_bdevs_discovered": 2, 00:21:33.300 "num_base_bdevs_operational": 2, 00:21:33.300 "process": { 00:21:33.300 "type": "rebuild", 00:21:33.300 "target": "spare", 00:21:33.300 "progress": { 00:21:33.300 "blocks": 2816, 00:21:33.300 "percent": 35 00:21:33.300 } 00:21:33.300 }, 00:21:33.300 "base_bdevs_list": [ 00:21:33.300 { 00:21:33.300 "name": "spare", 00:21:33.300 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:33.300 "is_configured": true, 00:21:33.300 "data_offset": 256, 00:21:33.300 "data_size": 7936 00:21:33.300 }, 00:21:33.300 { 00:21:33.300 "name": "BaseBdev2", 00:21:33.300 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:33.300 
"is_configured": true, 00:21:33.300 "data_offset": 256, 00:21:33.300 "data_size": 7936 00:21:33.300 } 00:21:33.300 ] 00:21:33.300 }' 00:21:33.300 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.560 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.560 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.560 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.560 22:38:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.495 22:38:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.495 "name": "raid_bdev1", 00:21:34.495 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:34.495 "strip_size_kb": 0, 00:21:34.495 "state": "online", 00:21:34.495 "raid_level": "raid1", 00:21:34.495 "superblock": true, 00:21:34.495 "num_base_bdevs": 2, 00:21:34.495 "num_base_bdevs_discovered": 2, 00:21:34.495 "num_base_bdevs_operational": 2, 00:21:34.495 "process": { 00:21:34.495 "type": "rebuild", 00:21:34.495 "target": "spare", 00:21:34.495 "progress": { 00:21:34.495 "blocks": 5632, 00:21:34.495 "percent": 70 00:21:34.495 } 00:21:34.495 }, 00:21:34.495 "base_bdevs_list": [ 00:21:34.495 { 00:21:34.495 "name": "spare", 00:21:34.495 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:34.495 "is_configured": true, 00:21:34.495 "data_offset": 256, 00:21:34.495 "data_size": 7936 00:21:34.495 }, 00:21:34.495 { 00:21:34.495 "name": "BaseBdev2", 00:21:34.495 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:34.495 "is_configured": true, 00:21:34.495 "data_offset": 256, 00:21:34.495 "data_size": 7936 00:21:34.495 } 00:21:34.495 ] 00:21:34.495 }' 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:34.495 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.754 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:34.754 22:38:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:35.321 [2024-09-27 22:38:31.131784] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:21:35.321 [2024-09-27 22:38:31.131872] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:35.321 [2024-09-27 22:38:31.131966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.580 "name": "raid_bdev1", 00:21:35.580 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:35.580 "strip_size_kb": 0, 00:21:35.580 "state": "online", 00:21:35.580 "raid_level": "raid1", 00:21:35.580 "superblock": true, 00:21:35.580 
"num_base_bdevs": 2, 00:21:35.580 "num_base_bdevs_discovered": 2, 00:21:35.580 "num_base_bdevs_operational": 2, 00:21:35.580 "base_bdevs_list": [ 00:21:35.580 { 00:21:35.580 "name": "spare", 00:21:35.580 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:35.580 "is_configured": true, 00:21:35.580 "data_offset": 256, 00:21:35.580 "data_size": 7936 00:21:35.580 }, 00:21:35.580 { 00:21:35.580 "name": "BaseBdev2", 00:21:35.580 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:35.580 "is_configured": true, 00:21:35.580 "data_offset": 256, 00:21:35.580 "data_size": 7936 00:21:35.580 } 00:21:35.580 ] 00:21:35.580 }' 00:21:35.580 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.839 22:38:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.839 "name": "raid_bdev1", 00:21:35.839 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:35.839 "strip_size_kb": 0, 00:21:35.839 "state": "online", 00:21:35.839 "raid_level": "raid1", 00:21:35.839 "superblock": true, 00:21:35.839 "num_base_bdevs": 2, 00:21:35.839 "num_base_bdevs_discovered": 2, 00:21:35.839 "num_base_bdevs_operational": 2, 00:21:35.839 "base_bdevs_list": [ 00:21:35.839 { 00:21:35.839 "name": "spare", 00:21:35.839 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:35.839 "is_configured": true, 00:21:35.839 "data_offset": 256, 00:21:35.839 "data_size": 7936 00:21:35.839 }, 00:21:35.839 { 00:21:35.839 "name": "BaseBdev2", 00:21:35.839 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:35.839 "is_configured": true, 00:21:35.839 "data_offset": 256, 00:21:35.839 "data_size": 7936 00:21:35.839 } 00:21:35.839 ] 00:21:35.839 }' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.839 "name": "raid_bdev1", 00:21:35.839 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:35.839 
"strip_size_kb": 0, 00:21:35.839 "state": "online", 00:21:35.839 "raid_level": "raid1", 00:21:35.839 "superblock": true, 00:21:35.839 "num_base_bdevs": 2, 00:21:35.839 "num_base_bdevs_discovered": 2, 00:21:35.839 "num_base_bdevs_operational": 2, 00:21:35.839 "base_bdevs_list": [ 00:21:35.839 { 00:21:35.839 "name": "spare", 00:21:35.839 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:35.839 "is_configured": true, 00:21:35.839 "data_offset": 256, 00:21:35.839 "data_size": 7936 00:21:35.839 }, 00:21:35.839 { 00:21:35.839 "name": "BaseBdev2", 00:21:35.839 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:35.839 "is_configured": true, 00:21:35.839 "data_offset": 256, 00:21:35.839 "data_size": 7936 00:21:35.839 } 00:21:35.839 ] 00:21:35.839 }' 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.839 22:38:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.407 [2024-09-27 22:38:32.018119] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.407 [2024-09-27 22:38:32.018161] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.407 [2024-09-27 22:38:32.018243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.407 [2024-09-27 22:38:32.018316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.407 [2024-09-27 22:38:32.018330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.407 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:36.407 /dev/nbd0 00:21:36.665 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.665 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.665 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:36.665 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:21:36.665 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.666 1+0 records in 00:21:36.666 1+0 records out 00:21:36.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045139 s, 9.1 MB/s 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.666 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:36.666 /dev/nbd1 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.924 1+0 records in 00:21:36.924 1+0 records out 00:21:36.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397603 s, 10.3 MB/s 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:36.924 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.183 22:38:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.442 [2024-09-27 22:38:33.200558] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:37.442 [2024-09-27 22:38:33.200616] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.442 [2024-09-27 22:38:33.200642] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:37.442 [2024-09-27 22:38:33.200654] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:37.442 [2024-09-27 22:38:33.202887] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.442 [2024-09-27 22:38:33.202926] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:37.442 [2024-09-27 22:38:33.203005] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:37.442 [2024-09-27 22:38:33.203058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:37.442 [2024-09-27 22:38:33.203198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.442 spare 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.442 [2024-09-27 22:38:33.303114] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:37.442 [2024-09-27 22:38:33.303148] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:37.442 [2024-09-27 22:38:33.303260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:37.442 [2024-09-27 22:38:33.303404] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:37.442 [2024-09-27 22:38:33.303414] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:37.442 [2024-09-27 22:38:33.303537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.442 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.701 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.701 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.701 "name": "raid_bdev1", 00:21:37.701 "uuid": 
"8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:37.701 "strip_size_kb": 0, 00:21:37.701 "state": "online", 00:21:37.701 "raid_level": "raid1", 00:21:37.701 "superblock": true, 00:21:37.701 "num_base_bdevs": 2, 00:21:37.701 "num_base_bdevs_discovered": 2, 00:21:37.701 "num_base_bdevs_operational": 2, 00:21:37.701 "base_bdevs_list": [ 00:21:37.701 { 00:21:37.701 "name": "spare", 00:21:37.701 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:37.701 "is_configured": true, 00:21:37.701 "data_offset": 256, 00:21:37.701 "data_size": 7936 00:21:37.701 }, 00:21:37.701 { 00:21:37.701 "name": "BaseBdev2", 00:21:37.701 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:37.701 "is_configured": true, 00:21:37.701 "data_offset": 256, 00:21:37.701 "data_size": 7936 00:21:37.701 } 00:21:37.701 ] 00:21:37.701 }' 00:21:37.701 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.701 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.959 "name": "raid_bdev1", 00:21:37.959 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:37.959 "strip_size_kb": 0, 00:21:37.959 "state": "online", 00:21:37.959 "raid_level": "raid1", 00:21:37.959 "superblock": true, 00:21:37.959 "num_base_bdevs": 2, 00:21:37.959 "num_base_bdevs_discovered": 2, 00:21:37.959 "num_base_bdevs_operational": 2, 00:21:37.959 "base_bdevs_list": [ 00:21:37.959 { 00:21:37.959 "name": "spare", 00:21:37.959 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:37.959 "is_configured": true, 00:21:37.959 "data_offset": 256, 00:21:37.959 "data_size": 7936 00:21:37.959 }, 00:21:37.959 { 00:21:37.959 "name": "BaseBdev2", 00:21:37.959 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:37.959 "is_configured": true, 00:21:37.959 "data_offset": 256, 00:21:37.959 "data_size": 7936 00:21:37.959 } 00:21:37.959 ] 00:21:37.959 }' 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.959 
22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.959 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.238 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.238 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.238 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:38.238 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.238 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.238 [2024-09-27 22:38:33.880125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:38.238 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.239 "name": "raid_bdev1", 00:21:38.239 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:38.239 "strip_size_kb": 0, 00:21:38.239 "state": "online", 00:21:38.239 "raid_level": "raid1", 00:21:38.239 "superblock": true, 00:21:38.239 "num_base_bdevs": 2, 00:21:38.239 "num_base_bdevs_discovered": 1, 00:21:38.239 "num_base_bdevs_operational": 1, 00:21:38.239 "base_bdevs_list": [ 00:21:38.239 { 00:21:38.239 "name": null, 00:21:38.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.239 "is_configured": false, 00:21:38.239 "data_offset": 0, 00:21:38.239 "data_size": 7936 00:21:38.239 }, 00:21:38.239 { 00:21:38.239 "name": "BaseBdev2", 00:21:38.239 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:38.239 "is_configured": true, 00:21:38.239 "data_offset": 256, 00:21:38.239 "data_size": 7936 00:21:38.239 } 00:21:38.239 ] 00:21:38.239 }' 00:21:38.239 22:38:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.239 22:38:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.497 22:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:38.497 22:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.497 22:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:38.497 [2024-09-27 22:38:34.272144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:38.497 [2024-09-27 22:38:34.272341] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:38.497 [2024-09-27 22:38:34.272361] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:38.497 [2024-09-27 22:38:34.272405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:38.497 [2024-09-27 22:38:34.288777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:38.497 22:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.497 22:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:38.497 [2024-09-27 22:38:34.290891] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.436 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.695 "name": "raid_bdev1", 00:21:39.695 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:39.695 "strip_size_kb": 0, 00:21:39.695 "state": "online", 00:21:39.695 "raid_level": "raid1", 00:21:39.695 "superblock": true, 00:21:39.695 "num_base_bdevs": 2, 00:21:39.695 "num_base_bdevs_discovered": 2, 00:21:39.695 "num_base_bdevs_operational": 2, 00:21:39.695 "process": { 00:21:39.695 "type": "rebuild", 00:21:39.695 "target": "spare", 00:21:39.695 "progress": { 00:21:39.695 "blocks": 2560, 00:21:39.695 "percent": 32 00:21:39.695 } 00:21:39.695 }, 00:21:39.695 "base_bdevs_list": [ 00:21:39.695 { 00:21:39.695 "name": "spare", 00:21:39.695 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:39.695 "is_configured": true, 00:21:39.695 "data_offset": 256, 00:21:39.695 "data_size": 7936 00:21:39.695 }, 00:21:39.695 { 00:21:39.695 "name": "BaseBdev2", 00:21:39.695 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:39.695 "is_configured": true, 00:21:39.695 "data_offset": 256, 00:21:39.695 "data_size": 7936 00:21:39.695 } 00:21:39.695 ] 00:21:39.695 }' 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.695 [2024-09-27 22:38:35.427280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:39.695 [2024-09-27 22:38:35.495592] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:39.695 [2024-09-27 22:38:35.495666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.695 [2024-09-27 22:38:35.495682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:39.695 [2024-09-27 22:38:35.495692] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:39.695 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.696 "name": "raid_bdev1", 00:21:39.696 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:39.696 "strip_size_kb": 0, 00:21:39.696 "state": "online", 00:21:39.696 "raid_level": "raid1", 00:21:39.696 "superblock": true, 00:21:39.696 "num_base_bdevs": 2, 00:21:39.696 "num_base_bdevs_discovered": 1, 00:21:39.696 "num_base_bdevs_operational": 1, 00:21:39.696 "base_bdevs_list": [ 00:21:39.696 { 00:21:39.696 "name": null, 00:21:39.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.696 
"is_configured": false, 00:21:39.696 "data_offset": 0, 00:21:39.696 "data_size": 7936 00:21:39.696 }, 00:21:39.696 { 00:21:39.696 "name": "BaseBdev2", 00:21:39.696 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:39.696 "is_configured": true, 00:21:39.696 "data_offset": 256, 00:21:39.696 "data_size": 7936 00:21:39.696 } 00:21:39.696 ] 00:21:39.696 }' 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.696 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.263 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:40.263 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.263 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:40.263 [2024-09-27 22:38:35.890246] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:40.263 [2024-09-27 22:38:35.890309] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.263 [2024-09-27 22:38:35.890336] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:40.263 [2024-09-27 22:38:35.890350] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.263 [2024-09-27 22:38:35.890599] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.263 [2024-09-27 22:38:35.890619] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:40.263 [2024-09-27 22:38:35.890677] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:40.263 [2024-09-27 22:38:35.890695] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:21:40.263 [2024-09-27 22:38:35.890710] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:40.263 [2024-09-27 22:38:35.890739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:40.263 [2024-09-27 22:38:35.908122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:40.263 spare 00:21:40.263 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.263 22:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:40.263 [2024-09-27 22:38:35.910200] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:41.200 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.200 "name": "raid_bdev1", 00:21:41.200 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:41.200 "strip_size_kb": 0, 00:21:41.200 "state": "online", 00:21:41.200 "raid_level": "raid1", 00:21:41.200 "superblock": true, 00:21:41.200 "num_base_bdevs": 2, 00:21:41.200 "num_base_bdevs_discovered": 2, 00:21:41.200 "num_base_bdevs_operational": 2, 00:21:41.200 "process": { 00:21:41.200 "type": "rebuild", 00:21:41.200 "target": "spare", 00:21:41.200 "progress": { 00:21:41.200 "blocks": 2560, 00:21:41.200 "percent": 32 00:21:41.200 } 00:21:41.200 }, 00:21:41.200 "base_bdevs_list": [ 00:21:41.200 { 00:21:41.200 "name": "spare", 00:21:41.200 "uuid": "7d76cde6-f019-5da8-a487-2982942e845f", 00:21:41.200 "is_configured": true, 00:21:41.200 "data_offset": 256, 00:21:41.200 "data_size": 7936 00:21:41.200 }, 00:21:41.200 { 00:21:41.200 "name": "BaseBdev2", 00:21:41.201 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:41.201 "is_configured": true, 00:21:41.201 "data_offset": 256, 00:21:41.201 "data_size": 7936 00:21:41.201 } 00:21:41.201 ] 00:21:41.201 }' 00:21:41.201 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.201 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.201 22:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.201 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.201 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:41.201 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.201 22:38:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.201 [2024-09-27 22:38:37.051153] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:41.460 [2024-09-27 22:38:37.114834] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:41.460 [2024-09-27 22:38:37.114894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.460 [2024-09-27 22:38:37.114913] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:41.460 [2024-09-27 22:38:37.114921] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.460 22:38:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.460 "name": "raid_bdev1", 00:21:41.460 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:41.460 "strip_size_kb": 0, 00:21:41.460 "state": "online", 00:21:41.460 "raid_level": "raid1", 00:21:41.460 "superblock": true, 00:21:41.460 "num_base_bdevs": 2, 00:21:41.460 "num_base_bdevs_discovered": 1, 00:21:41.460 "num_base_bdevs_operational": 1, 00:21:41.460 "base_bdevs_list": [ 00:21:41.460 { 00:21:41.460 "name": null, 00:21:41.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.460 "is_configured": false, 00:21:41.460 "data_offset": 0, 00:21:41.460 "data_size": 7936 00:21:41.460 }, 00:21:41.460 { 00:21:41.460 "name": "BaseBdev2", 00:21:41.460 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:41.460 "is_configured": true, 00:21:41.460 "data_offset": 256, 00:21:41.460 "data_size": 7936 00:21:41.460 } 00:21:41.460 ] 00:21:41.460 }' 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.460 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:41.720 "name": "raid_bdev1", 00:21:41.720 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:41.720 "strip_size_kb": 0, 00:21:41.720 "state": "online", 00:21:41.720 "raid_level": "raid1", 00:21:41.720 "superblock": true, 00:21:41.720 "num_base_bdevs": 2, 00:21:41.720 "num_base_bdevs_discovered": 1, 00:21:41.720 "num_base_bdevs_operational": 1, 00:21:41.720 "base_bdevs_list": [ 00:21:41.720 { 00:21:41.720 "name": null, 00:21:41.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.720 "is_configured": false, 00:21:41.720 "data_offset": 0, 00:21:41.720 "data_size": 7936 00:21:41.720 }, 00:21:41.720 { 00:21:41.720 "name": "BaseBdev2", 00:21:41.720 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:41.720 "is_configured": true, 
00:21:41.720 "data_offset": 256, 00:21:41.720 "data_size": 7936 00:21:41.720 } 00:21:41.720 ] 00:21:41.720 }' 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:41.720 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:41.979 [2024-09-27 22:38:37.609373] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:41.979 [2024-09-27 22:38:37.609432] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.979 [2024-09-27 22:38:37.609456] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:41.979 [2024-09-27 22:38:37.609468] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.979 [2024-09-27 22:38:37.609684] 
vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.979 [2024-09-27 22:38:37.609705] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:41.979 [2024-09-27 22:38:37.609759] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:41.979 [2024-09-27 22:38:37.609781] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:41.979 [2024-09-27 22:38:37.609793] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:41.979 [2024-09-27 22:38:37.609804] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:41.979 BaseBdev1 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.979 22:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.916 "name": "raid_bdev1", 00:21:42.916 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:42.916 "strip_size_kb": 0, 00:21:42.916 "state": "online", 00:21:42.916 "raid_level": "raid1", 00:21:42.916 "superblock": true, 00:21:42.916 "num_base_bdevs": 2, 00:21:42.916 "num_base_bdevs_discovered": 1, 00:21:42.916 "num_base_bdevs_operational": 1, 00:21:42.916 "base_bdevs_list": [ 00:21:42.916 { 00:21:42.916 "name": null, 00:21:42.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.916 "is_configured": false, 00:21:42.916 "data_offset": 0, 00:21:42.916 "data_size": 7936 00:21:42.916 }, 00:21:42.916 { 00:21:42.916 "name": "BaseBdev2", 00:21:42.916 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:42.916 "is_configured": true, 00:21:42.916 "data_offset": 256, 00:21:42.916 "data_size": 7936 00:21:42.916 } 00:21:42.916 ] 00:21:42.916 }' 00:21:42.916 22:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.916 22:38:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.174 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.443 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.443 "name": "raid_bdev1", 00:21:43.443 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:43.443 "strip_size_kb": 0, 00:21:43.443 "state": "online", 00:21:43.443 "raid_level": "raid1", 00:21:43.443 "superblock": true, 00:21:43.443 "num_base_bdevs": 2, 00:21:43.443 "num_base_bdevs_discovered": 1, 00:21:43.443 "num_base_bdevs_operational": 1, 00:21:43.443 "base_bdevs_list": [ 00:21:43.443 { 00:21:43.443 "name": null, 00:21:43.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.443 "is_configured": false, 00:21:43.443 "data_offset": 0, 00:21:43.443 
"data_size": 7936 00:21:43.443 }, 00:21:43.443 { 00:21:43.443 "name": "BaseBdev2", 00:21:43.443 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:43.443 "is_configured": true, 00:21:43.443 "data_offset": 256, 00:21:43.443 "data_size": 7936 00:21:43.443 } 00:21:43.443 ] 00:21:43.443 }' 00:21:43.443 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.443 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:43.444 [2024-09-27 22:38:39.144334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.444 [2024-09-27 22:38:39.144497] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:43.444 [2024-09-27 22:38:39.144515] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:43.444 request: 00:21:43.444 { 00:21:43.444 "base_bdev": "BaseBdev1", 00:21:43.444 "raid_bdev": "raid_bdev1", 00:21:43.444 "method": "bdev_raid_add_base_bdev", 00:21:43.444 "req_id": 1 00:21:43.444 } 00:21:43.444 Got JSON-RPC error response 00:21:43.444 response: 00:21:43.444 { 00:21:43.444 "code": -22, 00:21:43.444 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:43.444 } 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:43.444 22:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.394 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.394 "name": "raid_bdev1", 00:21:44.394 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:44.394 "strip_size_kb": 0, 00:21:44.394 "state": "online", 00:21:44.394 "raid_level": "raid1", 00:21:44.395 "superblock": true, 00:21:44.395 "num_base_bdevs": 2, 00:21:44.395 "num_base_bdevs_discovered": 1, 00:21:44.395 "num_base_bdevs_operational": 1, 00:21:44.395 "base_bdevs_list": [ 
00:21:44.395 { 00:21:44.395 "name": null, 00:21:44.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.395 "is_configured": false, 00:21:44.395 "data_offset": 0, 00:21:44.395 "data_size": 7936 00:21:44.395 }, 00:21:44.395 { 00:21:44.395 "name": "BaseBdev2", 00:21:44.395 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:44.395 "is_configured": true, 00:21:44.395 "data_offset": 256, 00:21:44.395 "data_size": 7936 00:21:44.395 } 00:21:44.395 ] 00:21:44.395 }' 00:21:44.395 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.395 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.963 "name": "raid_bdev1", 00:21:44.963 "uuid": "8306bdee-1aa8-423a-8244-2afd236df7e4", 00:21:44.963 "strip_size_kb": 0, 00:21:44.963 "state": "online", 00:21:44.963 "raid_level": "raid1", 00:21:44.963 "superblock": true, 00:21:44.963 "num_base_bdevs": 2, 00:21:44.963 "num_base_bdevs_discovered": 1, 00:21:44.963 "num_base_bdevs_operational": 1, 00:21:44.963 "base_bdevs_list": [ 00:21:44.963 { 00:21:44.963 "name": null, 00:21:44.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.963 "is_configured": false, 00:21:44.963 "data_offset": 0, 00:21:44.963 "data_size": 7936 00:21:44.963 }, 00:21:44.963 { 00:21:44.963 "name": "BaseBdev2", 00:21:44.963 "uuid": "a139e7a1-52df-5ddd-903e-3a8982e6499b", 00:21:44.963 "is_configured": true, 00:21:44.963 "data_offset": 256, 00:21:44.963 "data_size": 7936 00:21:44.963 } 00:21:44.963 ] 00:21:44.963 }' 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88957 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88957 ']' 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88957 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.963 
22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88957 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88957' 00:21:44.963 killing process with pid 88957 00:21:44.963 Received shutdown signal, test time was about 60.000000 seconds 00:21:44.963 00:21:44.963 Latency(us) 00:21:44.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.963 =================================================================================================================== 00:21:44.963 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88957 00:21:44.963 [2024-09-27 22:38:40.730380] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.963 22:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88957 00:21:44.963 [2024-09-27 22:38:40.730508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.963 [2024-09-27 22:38:40.730557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.963 [2024-09-27 22:38:40.730571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:45.222 [2024-09-27 22:38:41.055630] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.123 22:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:47.123 00:21:47.123 real 0m20.141s 00:21:47.123 user 0m25.373s 
00:21:47.123 sys 0m2.811s 00:21:47.123 22:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.123 22:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:47.123 ************************************ 00:21:47.123 END TEST raid_rebuild_test_sb_md_separate 00:21:47.123 ************************************ 00:21:47.383 22:38:43 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:47.383 22:38:43 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:47.383 22:38:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:47.383 22:38:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.383 22:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.383 ************************************ 00:21:47.383 START TEST raid_state_function_test_sb_md_interleaved 00:21:47.383 ************************************ 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:47.383 22:38:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89649 00:21:47.383 Process raid pid: 89649 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89649' 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89649 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89649 ']' 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.383 22:38:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:47.383 [2024-09-27 22:38:43.132110] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:21:47.383 [2024-09-27 22:38:43.132232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.642 [2024-09-27 22:38:43.303237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.901 [2024-09-27 22:38:43.522811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.901 [2024-09-27 22:38:43.759215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.901 [2024-09-27 22:38:43.759252] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.471 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.471 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:48.471 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:48.471 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.471 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.471 [2024-09-27 22:38:44.226368] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:48.471 [2024-09-27 22:38:44.226424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:48.471 [2024-09-27 22:38:44.226435] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:48.471 [2024-09-27 22:38:44.226448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:48.471 22:38:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.471 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.472 22:38:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.472 "name": "Existed_Raid", 00:21:48.472 "uuid": "8cd49221-eda9-4245-9deb-5783f984afd4", 00:21:48.472 "strip_size_kb": 0, 00:21:48.472 "state": "configuring", 00:21:48.472 "raid_level": "raid1", 00:21:48.472 "superblock": true, 00:21:48.472 "num_base_bdevs": 2, 00:21:48.472 "num_base_bdevs_discovered": 0, 00:21:48.472 "num_base_bdevs_operational": 2, 00:21:48.472 "base_bdevs_list": [ 00:21:48.472 { 00:21:48.472 "name": "BaseBdev1", 00:21:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.472 "is_configured": false, 00:21:48.472 "data_offset": 0, 00:21:48.472 "data_size": 0 00:21:48.472 }, 00:21:48.472 { 00:21:48.472 "name": "BaseBdev2", 00:21:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.472 "is_configured": false, 00:21:48.472 "data_offset": 0, 00:21:48.472 "data_size": 0 00:21:48.472 } 00:21:48.472 ] 00:21:48.472 }' 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.472 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.039 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 [2024-09-27 22:38:44.621755] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:49.040 [2024-09-27 22:38:44.621800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 [2024-09-27 22:38:44.629762] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:49.040 [2024-09-27 22:38:44.629808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:49.040 [2024-09-27 22:38:44.629817] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.040 [2024-09-27 22:38:44.629832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 [2024-09-27 22:38:44.680084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.040 BaseBdev1 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 [ 00:21:49.040 { 00:21:49.040 "name": "BaseBdev1", 00:21:49.040 "aliases": [ 00:21:49.040 "dc9f1788-7a40-498e-9b3f-d087fad40721" 00:21:49.040 ], 00:21:49.040 "product_name": "Malloc disk", 00:21:49.040 "block_size": 4128, 00:21:49.040 "num_blocks": 8192, 00:21:49.040 "uuid": "dc9f1788-7a40-498e-9b3f-d087fad40721", 00:21:49.040 "md_size": 32, 00:21:49.040 
"md_interleave": true, 00:21:49.040 "dif_type": 0, 00:21:49.040 "assigned_rate_limits": { 00:21:49.040 "rw_ios_per_sec": 0, 00:21:49.040 "rw_mbytes_per_sec": 0, 00:21:49.040 "r_mbytes_per_sec": 0, 00:21:49.040 "w_mbytes_per_sec": 0 00:21:49.040 }, 00:21:49.040 "claimed": true, 00:21:49.040 "claim_type": "exclusive_write", 00:21:49.040 "zoned": false, 00:21:49.040 "supported_io_types": { 00:21:49.040 "read": true, 00:21:49.040 "write": true, 00:21:49.040 "unmap": true, 00:21:49.040 "flush": true, 00:21:49.040 "reset": true, 00:21:49.040 "nvme_admin": false, 00:21:49.040 "nvme_io": false, 00:21:49.040 "nvme_io_md": false, 00:21:49.040 "write_zeroes": true, 00:21:49.040 "zcopy": true, 00:21:49.040 "get_zone_info": false, 00:21:49.040 "zone_management": false, 00:21:49.040 "zone_append": false, 00:21:49.040 "compare": false, 00:21:49.040 "compare_and_write": false, 00:21:49.040 "abort": true, 00:21:49.040 "seek_hole": false, 00:21:49.040 "seek_data": false, 00:21:49.040 "copy": true, 00:21:49.040 "nvme_iov_md": false 00:21:49.040 }, 00:21:49.040 "memory_domains": [ 00:21:49.040 { 00:21:49.040 "dma_device_id": "system", 00:21:49.040 "dma_device_type": 1 00:21:49.040 }, 00:21:49.040 { 00:21:49.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.040 "dma_device_type": 2 00:21:49.040 } 00:21:49.040 ], 00:21:49.040 "driver_specific": {} 00:21:49.040 } 00:21:49.040 ] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.040 22:38:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.040 "name": "Existed_Raid", 00:21:49.040 "uuid": "2dc0d7a1-97a3-4dab-9849-dbbdb9779efb", 00:21:49.040 "strip_size_kb": 0, 00:21:49.040 "state": "configuring", 00:21:49.040 "raid_level": "raid1", 
00:21:49.040 "superblock": true, 00:21:49.040 "num_base_bdevs": 2, 00:21:49.040 "num_base_bdevs_discovered": 1, 00:21:49.040 "num_base_bdevs_operational": 2, 00:21:49.040 "base_bdevs_list": [ 00:21:49.040 { 00:21:49.040 "name": "BaseBdev1", 00:21:49.040 "uuid": "dc9f1788-7a40-498e-9b3f-d087fad40721", 00:21:49.040 "is_configured": true, 00:21:49.040 "data_offset": 256, 00:21:49.040 "data_size": 7936 00:21:49.040 }, 00:21:49.040 { 00:21:49.040 "name": "BaseBdev2", 00:21:49.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.040 "is_configured": false, 00:21:49.040 "data_offset": 0, 00:21:49.040 "data_size": 0 00:21:49.040 } 00:21:49.040 ] 00:21:49.040 }' 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.040 22:38:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.308 [2024-09-27 22:38:45.107649] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:49.308 [2024-09-27 22:38:45.107700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.308 [2024-09-27 22:38:45.115692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.308 [2024-09-27 22:38:45.117755] bdev.c:8309:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:49.308 [2024-09-27 22:38:45.117804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.308 
22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.308 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.309 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.309 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.309 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.309 "name": "Existed_Raid", 00:21:49.309 "uuid": "14e21e52-3fdf-46e0-af20-c7e0aabcf5b2", 00:21:49.309 "strip_size_kb": 0, 00:21:49.309 "state": "configuring", 00:21:49.309 "raid_level": "raid1", 00:21:49.309 "superblock": true, 00:21:49.309 "num_base_bdevs": 2, 00:21:49.309 "num_base_bdevs_discovered": 1, 00:21:49.309 "num_base_bdevs_operational": 2, 00:21:49.309 "base_bdevs_list": [ 00:21:49.309 { 00:21:49.309 "name": "BaseBdev1", 00:21:49.309 "uuid": "dc9f1788-7a40-498e-9b3f-d087fad40721", 00:21:49.309 "is_configured": true, 00:21:49.309 "data_offset": 256, 00:21:49.309 "data_size": 7936 00:21:49.309 }, 00:21:49.309 { 00:21:49.309 "name": "BaseBdev2", 00:21:49.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.309 "is_configured": false, 00:21:49.309 "data_offset": 0, 00:21:49.309 "data_size": 0 00:21:49.309 } 00:21:49.309 ] 00:21:49.309 }' 00:21:49.309 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:49.309 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.896 [2024-09-27 22:38:45.570226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.896 [2024-09-27 22:38:45.570429] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:49.896 [2024-09-27 22:38:45.570443] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:49.896 [2024-09-27 22:38:45.570530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:49.896 [2024-09-27 22:38:45.570609] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:49.896 [2024-09-27 22:38:45.570624] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:49.896 BaseBdev2 00:21:49.896 [2024-09-27 22:38:45.570680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.896 [ 00:21:49.896 { 00:21:49.896 "name": "BaseBdev2", 00:21:49.896 "aliases": [ 00:21:49.896 "e90d7dc0-419e-4c03-8005-87c55a95ceaf" 00:21:49.896 ], 00:21:49.896 "product_name": "Malloc disk", 00:21:49.896 "block_size": 4128, 00:21:49.896 "num_blocks": 8192, 00:21:49.896 "uuid": "e90d7dc0-419e-4c03-8005-87c55a95ceaf", 00:21:49.896 "md_size": 32, 00:21:49.896 "md_interleave": true, 00:21:49.896 "dif_type": 0, 00:21:49.896 "assigned_rate_limits": { 00:21:49.896 "rw_ios_per_sec": 0, 00:21:49.896 "rw_mbytes_per_sec": 0, 00:21:49.896 "r_mbytes_per_sec": 0, 00:21:49.896 "w_mbytes_per_sec": 0 00:21:49.896 }, 00:21:49.896 "claimed": true, 00:21:49.896 "claim_type": "exclusive_write", 
00:21:49.896 "zoned": false, 00:21:49.896 "supported_io_types": { 00:21:49.896 "read": true, 00:21:49.896 "write": true, 00:21:49.896 "unmap": true, 00:21:49.896 "flush": true, 00:21:49.896 "reset": true, 00:21:49.896 "nvme_admin": false, 00:21:49.896 "nvme_io": false, 00:21:49.896 "nvme_io_md": false, 00:21:49.896 "write_zeroes": true, 00:21:49.896 "zcopy": true, 00:21:49.896 "get_zone_info": false, 00:21:49.896 "zone_management": false, 00:21:49.896 "zone_append": false, 00:21:49.896 "compare": false, 00:21:49.896 "compare_and_write": false, 00:21:49.896 "abort": true, 00:21:49.896 "seek_hole": false, 00:21:49.896 "seek_data": false, 00:21:49.896 "copy": true, 00:21:49.896 "nvme_iov_md": false 00:21:49.896 }, 00:21:49.896 "memory_domains": [ 00:21:49.896 { 00:21:49.896 "dma_device_id": "system", 00:21:49.896 "dma_device_type": 1 00:21:49.896 }, 00:21:49.896 { 00:21:49.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.896 "dma_device_type": 2 00:21:49.896 } 00:21:49.896 ], 00:21:49.896 "driver_specific": {} 00:21:49.896 } 00:21:49.896 ] 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:49.896 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.897 
22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.897 "name": "Existed_Raid", 00:21:49.897 "uuid": "14e21e52-3fdf-46e0-af20-c7e0aabcf5b2", 00:21:49.897 "strip_size_kb": 0, 00:21:49.897 "state": "online", 00:21:49.897 "raid_level": "raid1", 00:21:49.897 "superblock": true, 00:21:49.897 "num_base_bdevs": 2, 00:21:49.897 "num_base_bdevs_discovered": 2, 00:21:49.897 
"num_base_bdevs_operational": 2, 00:21:49.897 "base_bdevs_list": [ 00:21:49.897 { 00:21:49.897 "name": "BaseBdev1", 00:21:49.897 "uuid": "dc9f1788-7a40-498e-9b3f-d087fad40721", 00:21:49.897 "is_configured": true, 00:21:49.897 "data_offset": 256, 00:21:49.897 "data_size": 7936 00:21:49.897 }, 00:21:49.897 { 00:21:49.897 "name": "BaseBdev2", 00:21:49.897 "uuid": "e90d7dc0-419e-4c03-8005-87c55a95ceaf", 00:21:49.897 "is_configured": true, 00:21:49.897 "data_offset": 256, 00:21:49.897 "data_size": 7936 00:21:49.897 } 00:21:49.897 ] 00:21:49.897 }' 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.897 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:50.155 22:38:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.155 22:38:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.155 [2024-09-27 22:38:45.978050] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.155 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.155 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:50.155 "name": "Existed_Raid", 00:21:50.155 "aliases": [ 00:21:50.155 "14e21e52-3fdf-46e0-af20-c7e0aabcf5b2" 00:21:50.155 ], 00:21:50.155 "product_name": "Raid Volume", 00:21:50.155 "block_size": 4128, 00:21:50.155 "num_blocks": 7936, 00:21:50.155 "uuid": "14e21e52-3fdf-46e0-af20-c7e0aabcf5b2", 00:21:50.155 "md_size": 32, 00:21:50.155 "md_interleave": true, 00:21:50.155 "dif_type": 0, 00:21:50.155 "assigned_rate_limits": { 00:21:50.155 "rw_ios_per_sec": 0, 00:21:50.155 "rw_mbytes_per_sec": 0, 00:21:50.155 "r_mbytes_per_sec": 0, 00:21:50.155 "w_mbytes_per_sec": 0 00:21:50.155 }, 00:21:50.155 "claimed": false, 00:21:50.155 "zoned": false, 00:21:50.155 "supported_io_types": { 00:21:50.155 "read": true, 00:21:50.155 "write": true, 00:21:50.155 "unmap": false, 00:21:50.155 "flush": false, 00:21:50.155 "reset": true, 00:21:50.155 "nvme_admin": false, 00:21:50.155 "nvme_io": false, 00:21:50.155 "nvme_io_md": false, 00:21:50.155 "write_zeroes": true, 00:21:50.155 "zcopy": false, 00:21:50.155 "get_zone_info": false, 00:21:50.155 "zone_management": false, 00:21:50.155 "zone_append": false, 00:21:50.155 "compare": false, 00:21:50.155 "compare_and_write": false, 00:21:50.155 "abort": false, 00:21:50.155 "seek_hole": false, 00:21:50.155 "seek_data": false, 00:21:50.155 "copy": false, 00:21:50.155 "nvme_iov_md": false 00:21:50.155 }, 00:21:50.155 "memory_domains": [ 00:21:50.155 { 00:21:50.155 "dma_device_id": "system", 00:21:50.155 "dma_device_type": 1 00:21:50.155 }, 00:21:50.155 { 00:21:50.155 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:50.155 "dma_device_type": 2 00:21:50.155 }, 00:21:50.155 { 00:21:50.155 "dma_device_id": "system", 00:21:50.155 "dma_device_type": 1 00:21:50.155 }, 00:21:50.155 { 00:21:50.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.155 "dma_device_type": 2 00:21:50.155 } 00:21:50.155 ], 00:21:50.155 "driver_specific": { 00:21:50.155 "raid": { 00:21:50.155 "uuid": "14e21e52-3fdf-46e0-af20-c7e0aabcf5b2", 00:21:50.155 "strip_size_kb": 0, 00:21:50.155 "state": "online", 00:21:50.155 "raid_level": "raid1", 00:21:50.155 "superblock": true, 00:21:50.155 "num_base_bdevs": 2, 00:21:50.155 "num_base_bdevs_discovered": 2, 00:21:50.155 "num_base_bdevs_operational": 2, 00:21:50.155 "base_bdevs_list": [ 00:21:50.155 { 00:21:50.155 "name": "BaseBdev1", 00:21:50.155 "uuid": "dc9f1788-7a40-498e-9b3f-d087fad40721", 00:21:50.155 "is_configured": true, 00:21:50.155 "data_offset": 256, 00:21:50.155 "data_size": 7936 00:21:50.155 }, 00:21:50.155 { 00:21:50.155 "name": "BaseBdev2", 00:21:50.155 "uuid": "e90d7dc0-419e-4c03-8005-87c55a95ceaf", 00:21:50.155 "is_configured": true, 00:21:50.155 "data_offset": 256, 00:21:50.155 "data_size": 7936 00:21:50.155 } 00:21:50.155 ] 00:21:50.155 } 00:21:50.155 } 00:21:50.155 }' 00:21:50.156 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:50.413 BaseBdev2' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:50.413 
22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.413 [2024-09-27 22:38:46.169529] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.413 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.414 22:38:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.414 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.672 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.672 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.672 "name": "Existed_Raid", 00:21:50.672 "uuid": "14e21e52-3fdf-46e0-af20-c7e0aabcf5b2", 00:21:50.672 "strip_size_kb": 0, 00:21:50.672 "state": "online", 00:21:50.672 "raid_level": "raid1", 00:21:50.672 "superblock": true, 00:21:50.672 "num_base_bdevs": 2, 00:21:50.672 "num_base_bdevs_discovered": 1, 00:21:50.672 "num_base_bdevs_operational": 1, 00:21:50.672 "base_bdevs_list": [ 00:21:50.672 { 00:21:50.672 "name": null, 00:21:50.672 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:50.672 "is_configured": false, 00:21:50.672 "data_offset": 0, 00:21:50.672 "data_size": 7936 00:21:50.672 }, 00:21:50.672 { 00:21:50.672 "name": "BaseBdev2", 00:21:50.672 "uuid": "e90d7dc0-419e-4c03-8005-87c55a95ceaf", 00:21:50.672 "is_configured": true, 00:21:50.672 "data_offset": 256, 00:21:50.672 "data_size": 7936 00:21:50.672 } 00:21:50.672 ] 00:21:50.672 }' 00:21:50.672 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.672 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:50.931 22:38:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.931 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.931 [2024-09-27 22:38:46.716791] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:50.931 [2024-09-27 22:38:46.716894] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.190 [2024-09-27 22:38:46.813118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.190 [2024-09-27 22:38:46.813168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.190 [2024-09-27 22:38:46.813184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89649 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89649 ']' 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89649 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89649 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:51.190 killing process with pid 89649 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89649' 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89649 00:21:51.190 [2024-09-27 22:38:46.907589] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:51.190 22:38:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89649 00:21:51.190 [2024-09-27 22:38:46.924932] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:53.091 
22:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:53.091 00:21:53.091 real 0m5.827s 00:21:53.091 user 0m7.643s 00:21:53.091 sys 0m1.039s 00:21:53.091 22:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.091 22:38:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.091 ************************************ 00:21:53.091 END TEST raid_state_function_test_sb_md_interleaved 00:21:53.091 ************************************ 00:21:53.091 22:38:48 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:53.091 22:38:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:53.091 22:38:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.091 22:38:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.091 ************************************ 00:21:53.091 START TEST raid_superblock_test_md_interleaved 00:21:53.091 ************************************ 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89910 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89910 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89910 ']' 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.091 22:38:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.350 [2024-09-27 22:38:49.030619] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:21:53.350 [2024-09-27 22:38:49.030786] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89910 ] 00:21:53.350 [2024-09-27 22:38:49.201448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.608 [2024-09-27 22:38:49.415695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.866 [2024-09-27 22:38:49.647929] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.866 [2024-09-27 22:38:49.647993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.433 malloc1 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:54.433 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.434 [2024-09-27 22:38:50.172942] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:54.434 [2024-09-27 22:38:50.173017] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.434 [2024-09-27 22:38:50.173046] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:54.434 [2024-09-27 22:38:50.173058] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.434 
[2024-09-27 22:38:50.175118] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.434 [2024-09-27 22:38:50.175156] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:54.434 pt1 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.434 malloc2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.434 [2024-09-27 22:38:50.233566] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:54.434 [2024-09-27 22:38:50.233621] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.434 [2024-09-27 22:38:50.233662] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:54.434 [2024-09-27 22:38:50.233673] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.434 [2024-09-27 22:38:50.235724] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.434 [2024-09-27 22:38:50.235760] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:54.434 pt2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.434 [2024-09-27 22:38:50.245626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:54.434 [2024-09-27 22:38:50.247644] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:54.434 [2024-09-27 22:38:50.247842] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:54.434 [2024-09-27 22:38:50.247866] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:54.434 [2024-09-27 22:38:50.247936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:54.434 [2024-09-27 22:38:50.248028] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:54.434 [2024-09-27 22:38:50.248046] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:54.434 [2024-09-27 22:38:50.248112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.434 
22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.434 "name": "raid_bdev1", 00:21:54.434 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:54.434 "strip_size_kb": 0, 00:21:54.434 "state": "online", 00:21:54.434 "raid_level": "raid1", 00:21:54.434 "superblock": true, 00:21:54.434 "num_base_bdevs": 2, 00:21:54.434 "num_base_bdevs_discovered": 2, 00:21:54.434 "num_base_bdevs_operational": 2, 00:21:54.434 "base_bdevs_list": [ 00:21:54.434 { 00:21:54.434 "name": "pt1", 00:21:54.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:54.434 "is_configured": true, 00:21:54.434 "data_offset": 256, 00:21:54.434 "data_size": 7936 00:21:54.434 }, 00:21:54.434 { 00:21:54.434 "name": "pt2", 00:21:54.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.434 "is_configured": true, 00:21:54.434 "data_offset": 256, 00:21:54.434 "data_size": 7936 00:21:54.434 } 00:21:54.434 ] 00:21:54.434 }' 00:21:54.434 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.434 22:38:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:55.034 [2024-09-27 22:38:50.673294] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.034 "name": "raid_bdev1", 00:21:55.034 "aliases": [ 00:21:55.034 "72bdc72d-24d2-4f18-90dd-24653517fb4e" 00:21:55.034 ], 00:21:55.034 "product_name": "Raid Volume", 00:21:55.034 "block_size": 4128, 00:21:55.034 "num_blocks": 7936, 00:21:55.034 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:55.034 "md_size": 32, 
00:21:55.034 "md_interleave": true, 00:21:55.034 "dif_type": 0, 00:21:55.034 "assigned_rate_limits": { 00:21:55.034 "rw_ios_per_sec": 0, 00:21:55.034 "rw_mbytes_per_sec": 0, 00:21:55.034 "r_mbytes_per_sec": 0, 00:21:55.034 "w_mbytes_per_sec": 0 00:21:55.034 }, 00:21:55.034 "claimed": false, 00:21:55.034 "zoned": false, 00:21:55.034 "supported_io_types": { 00:21:55.034 "read": true, 00:21:55.034 "write": true, 00:21:55.034 "unmap": false, 00:21:55.034 "flush": false, 00:21:55.034 "reset": true, 00:21:55.034 "nvme_admin": false, 00:21:55.034 "nvme_io": false, 00:21:55.034 "nvme_io_md": false, 00:21:55.034 "write_zeroes": true, 00:21:55.034 "zcopy": false, 00:21:55.034 "get_zone_info": false, 00:21:55.034 "zone_management": false, 00:21:55.034 "zone_append": false, 00:21:55.034 "compare": false, 00:21:55.034 "compare_and_write": false, 00:21:55.034 "abort": false, 00:21:55.034 "seek_hole": false, 00:21:55.034 "seek_data": false, 00:21:55.034 "copy": false, 00:21:55.034 "nvme_iov_md": false 00:21:55.034 }, 00:21:55.034 "memory_domains": [ 00:21:55.034 { 00:21:55.034 "dma_device_id": "system", 00:21:55.034 "dma_device_type": 1 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.034 "dma_device_type": 2 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "dma_device_id": "system", 00:21:55.034 "dma_device_type": 1 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.034 "dma_device_type": 2 00:21:55.034 } 00:21:55.034 ], 00:21:55.034 "driver_specific": { 00:21:55.034 "raid": { 00:21:55.034 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:55.034 "strip_size_kb": 0, 00:21:55.034 "state": "online", 00:21:55.034 "raid_level": "raid1", 00:21:55.034 "superblock": true, 00:21:55.034 "num_base_bdevs": 2, 00:21:55.034 "num_base_bdevs_discovered": 2, 00:21:55.034 "num_base_bdevs_operational": 2, 00:21:55.034 "base_bdevs_list": [ 00:21:55.034 { 00:21:55.034 "name": "pt1", 00:21:55.034 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:55.034 "is_configured": true, 00:21:55.034 "data_offset": 256, 00:21:55.034 "data_size": 7936 00:21:55.034 }, 00:21:55.034 { 00:21:55.034 "name": "pt2", 00:21:55.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.034 "is_configured": true, 00:21:55.034 "data_offset": 256, 00:21:55.034 "data_size": 7936 00:21:55.034 } 00:21:55.034 ] 00:21:55.034 } 00:21:55.034 } 00:21:55.034 }' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:55.034 pt2' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:55.034 22:38:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.034 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 [2024-09-27 22:38:50.880947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=72bdc72d-24d2-4f18-90dd-24653517fb4e 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 72bdc72d-24d2-4f18-90dd-24653517fb4e ']' 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 [2024-09-27 22:38:50.920628] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.294 [2024-09-27 22:38:50.920656] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.294 [2024-09-27 22:38:50.920724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.294 [2024-09-27 22:38:50.920788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.294 [2024-09-27 22:38:50.920801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 22:38:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:55.294 22:38:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 [2024-09-27 22:38:51.048488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:55.294 [2024-09-27 22:38:51.050672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:55.294 [2024-09-27 22:38:51.050747] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:55.294 [2024-09-27 22:38:51.050801] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:55.294 [2024-09-27 22:38:51.050818] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.294 [2024-09-27 22:38:51.050832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:55.294 request: 00:21:55.294 { 00:21:55.294 "name": "raid_bdev1", 00:21:55.294 "raid_level": "raid1", 00:21:55.294 "base_bdevs": [ 00:21:55.294 "malloc1", 00:21:55.294 "malloc2" 00:21:55.294 ], 00:21:55.294 "superblock": false, 00:21:55.294 "method": "bdev_raid_create", 00:21:55.294 "req_id": 1 00:21:55.294 } 00:21:55.294 Got JSON-RPC error response 00:21:55.294 response: 00:21:55.294 { 00:21:55.294 "code": -17, 00:21:55.294 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:55.294 } 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 22:38:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.294 [2024-09-27 22:38:51.108370] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:55.294 [2024-09-27 22:38:51.108424] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.294 [2024-09-27 22:38:51.108442] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:55.294 [2024-09-27 22:38:51.108455] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.294 [2024-09-27 22:38:51.110551] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.294 [2024-09-27 22:38:51.110592] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:55.294 [2024-09-27 22:38:51.110641] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:55.294 [2024-09-27 22:38:51.110707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.294 pt1 00:21:55.294 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.295 22:38:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.295 
"name": "raid_bdev1", 00:21:55.295 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:55.295 "strip_size_kb": 0, 00:21:55.295 "state": "configuring", 00:21:55.295 "raid_level": "raid1", 00:21:55.295 "superblock": true, 00:21:55.295 "num_base_bdevs": 2, 00:21:55.295 "num_base_bdevs_discovered": 1, 00:21:55.295 "num_base_bdevs_operational": 2, 00:21:55.295 "base_bdevs_list": [ 00:21:55.295 { 00:21:55.295 "name": "pt1", 00:21:55.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.295 "is_configured": true, 00:21:55.295 "data_offset": 256, 00:21:55.295 "data_size": 7936 00:21:55.295 }, 00:21:55.295 { 00:21:55.295 "name": null, 00:21:55.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.295 "is_configured": false, 00:21:55.295 "data_offset": 256, 00:21:55.295 "data_size": 7936 00:21:55.295 } 00:21:55.295 ] 00:21:55.295 }' 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.295 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.862 [2024-09-27 22:38:51.539873] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:55.862 [2024-09-27 22:38:51.539946] 
vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.862 [2024-09-27 22:38:51.539969] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:55.862 [2024-09-27 22:38:51.539993] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.862 [2024-09-27 22:38:51.540160] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.862 [2024-09-27 22:38:51.540180] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:55.862 [2024-09-27 22:38:51.540229] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:55.862 [2024-09-27 22:38:51.540259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.862 [2024-09-27 22:38:51.540353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:55.862 [2024-09-27 22:38:51.540366] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:55.862 [2024-09-27 22:38:51.540448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:55.862 [2024-09-27 22:38:51.540518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:55.862 [2024-09-27 22:38:51.540527] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:55.862 [2024-09-27 22:38:51.540596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.862 pt2 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:55.862 22:38:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.862 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.862 "name": 
"raid_bdev1", 00:21:55.862 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:55.862 "strip_size_kb": 0, 00:21:55.862 "state": "online", 00:21:55.862 "raid_level": "raid1", 00:21:55.862 "superblock": true, 00:21:55.862 "num_base_bdevs": 2, 00:21:55.862 "num_base_bdevs_discovered": 2, 00:21:55.862 "num_base_bdevs_operational": 2, 00:21:55.862 "base_bdevs_list": [ 00:21:55.862 { 00:21:55.862 "name": "pt1", 00:21:55.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.862 "is_configured": true, 00:21:55.862 "data_offset": 256, 00:21:55.862 "data_size": 7936 00:21:55.862 }, 00:21:55.862 { 00:21:55.863 "name": "pt2", 00:21:55.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.863 "is_configured": true, 00:21:55.863 "data_offset": 256, 00:21:55.863 "data_size": 7936 00:21:55.863 } 00:21:55.863 ] 00:21:55.863 }' 00:21:55.863 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.863 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.121 22:38:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.121 [2024-09-27 22:38:51.963512] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.121 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:56.121 "name": "raid_bdev1", 00:21:56.121 "aliases": [ 00:21:56.121 "72bdc72d-24d2-4f18-90dd-24653517fb4e" 00:21:56.121 ], 00:21:56.121 "product_name": "Raid Volume", 00:21:56.121 "block_size": 4128, 00:21:56.121 "num_blocks": 7936, 00:21:56.121 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:56.121 "md_size": 32, 00:21:56.121 "md_interleave": true, 00:21:56.121 "dif_type": 0, 00:21:56.121 "assigned_rate_limits": { 00:21:56.121 "rw_ios_per_sec": 0, 00:21:56.121 "rw_mbytes_per_sec": 0, 00:21:56.121 "r_mbytes_per_sec": 0, 00:21:56.121 "w_mbytes_per_sec": 0 00:21:56.121 }, 00:21:56.121 "claimed": false, 00:21:56.121 "zoned": false, 00:21:56.121 "supported_io_types": { 00:21:56.121 "read": true, 00:21:56.121 "write": true, 00:21:56.121 "unmap": false, 00:21:56.121 "flush": false, 00:21:56.121 "reset": true, 00:21:56.121 "nvme_admin": false, 00:21:56.121 "nvme_io": false, 00:21:56.121 "nvme_io_md": false, 00:21:56.121 "write_zeroes": true, 00:21:56.121 "zcopy": false, 00:21:56.121 "get_zone_info": false, 00:21:56.121 "zone_management": false, 00:21:56.121 "zone_append": false, 00:21:56.121 "compare": false, 00:21:56.121 "compare_and_write": false, 00:21:56.121 "abort": false, 00:21:56.121 "seek_hole": false, 00:21:56.121 "seek_data": false, 00:21:56.121 "copy": false, 00:21:56.121 "nvme_iov_md": 
false 00:21:56.121 }, 00:21:56.121 "memory_domains": [ 00:21:56.121 { 00:21:56.121 "dma_device_id": "system", 00:21:56.121 "dma_device_type": 1 00:21:56.121 }, 00:21:56.121 { 00:21:56.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.121 "dma_device_type": 2 00:21:56.121 }, 00:21:56.121 { 00:21:56.121 "dma_device_id": "system", 00:21:56.121 "dma_device_type": 1 00:21:56.121 }, 00:21:56.121 { 00:21:56.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.121 "dma_device_type": 2 00:21:56.121 } 00:21:56.121 ], 00:21:56.121 "driver_specific": { 00:21:56.121 "raid": { 00:21:56.121 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:56.121 "strip_size_kb": 0, 00:21:56.121 "state": "online", 00:21:56.121 "raid_level": "raid1", 00:21:56.121 "superblock": true, 00:21:56.121 "num_base_bdevs": 2, 00:21:56.121 "num_base_bdevs_discovered": 2, 00:21:56.121 "num_base_bdevs_operational": 2, 00:21:56.121 "base_bdevs_list": [ 00:21:56.121 { 00:21:56.121 "name": "pt1", 00:21:56.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.121 "is_configured": true, 00:21:56.121 "data_offset": 256, 00:21:56.121 "data_size": 7936 00:21:56.121 }, 00:21:56.121 { 00:21:56.121 "name": "pt2", 00:21:56.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.121 "is_configured": true, 00:21:56.121 "data_offset": 256, 00:21:56.121 "data_size": 7936 00:21:56.121 } 00:21:56.121 ] 00:21:56.121 } 00:21:56.121 } 00:21:56.121 }' 00:21:56.379 22:38:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:56.379 pt2' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:56.379 [2024-09-27 22:38:52.179203] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 72bdc72d-24d2-4f18-90dd-24653517fb4e '!=' 72bdc72d-24d2-4f18-90dd-24653517fb4e ']' 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.379 [2024-09-27 22:38:52.222948] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.379 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.380 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.380 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.380 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.380 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.637 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.637 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:56.637 "name": "raid_bdev1", 00:21:56.637 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:56.637 "strip_size_kb": 0, 00:21:56.637 "state": "online", 00:21:56.637 "raid_level": "raid1", 00:21:56.637 "superblock": true, 00:21:56.637 "num_base_bdevs": 2, 00:21:56.637 "num_base_bdevs_discovered": 1, 00:21:56.637 "num_base_bdevs_operational": 1, 00:21:56.637 "base_bdevs_list": [ 00:21:56.637 { 00:21:56.637 "name": null, 00:21:56.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.637 "is_configured": false, 00:21:56.637 "data_offset": 0, 00:21:56.637 "data_size": 7936 00:21:56.637 }, 00:21:56.637 { 00:21:56.637 "name": "pt2", 00:21:56.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.637 "is_configured": true, 00:21:56.637 "data_offset": 256, 00:21:56.637 "data_size": 7936 00:21:56.637 } 00:21:56.637 ] 00:21:56.637 }' 00:21:56.637 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.637 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.896 [2024-09-27 22:38:52.626346] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.896 [2024-09-27 22:38:52.626381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.896 [2024-09-27 22:38:52.626452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.896 [2024-09-27 22:38:52.626496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:56.896 [2024-09-27 22:38:52.626510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.896 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.897 [2024-09-27 22:38:52.686252] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:56.897 [2024-09-27 22:38:52.686309] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.897 [2024-09-27 22:38:52.686327] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:56.897 [2024-09-27 22:38:52.686341] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.897 [2024-09-27 22:38:52.688502] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.897 [2024-09-27 22:38:52.688545] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:56.897 [2024-09-27 22:38:52.688597] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:56.897 [2024-09-27 22:38:52.688642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:56.897 [2024-09-27 22:38:52.688704] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:56.897 [2024-09-27 22:38:52.688718] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:21:56.897 [2024-09-27 22:38:52.688805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:56.897 [2024-09-27 22:38:52.688866] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:56.897 [2024-09-27 22:38:52.688874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:56.897 [2024-09-27 22:38:52.688932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.897 pt2 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.897 22:38:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.897 "name": "raid_bdev1", 00:21:56.897 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:56.897 "strip_size_kb": 0, 00:21:56.897 "state": "online", 00:21:56.897 "raid_level": "raid1", 00:21:56.897 "superblock": true, 00:21:56.897 "num_base_bdevs": 2, 00:21:56.897 "num_base_bdevs_discovered": 1, 00:21:56.897 "num_base_bdevs_operational": 1, 00:21:56.897 "base_bdevs_list": [ 00:21:56.897 { 00:21:56.897 "name": null, 00:21:56.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.897 "is_configured": false, 00:21:56.897 "data_offset": 256, 00:21:56.897 "data_size": 7936 00:21:56.897 }, 00:21:56.897 { 00:21:56.897 "name": "pt2", 00:21:56.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.897 "is_configured": true, 00:21:56.897 "data_offset": 256, 00:21:56.897 "data_size": 7936 00:21:56.897 } 00:21:56.897 ] 00:21:56.897 }' 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.897 22:38:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.464 22:38:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.464 [2024-09-27 22:38:53.097655] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.464 [2024-09-27 22:38:53.097803] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.464 [2024-09-27 22:38:53.097935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.464 [2024-09-27 22:38:53.098087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.464 [2024-09-27 22:38:53.098194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.464 [2024-09-27 22:38:53.153595] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:57.464 [2024-09-27 22:38:53.153647] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.464 [2024-09-27 22:38:53.153667] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:57.464 [2024-09-27 22:38:53.153678] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.464 [2024-09-27 22:38:53.155784] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.464 [2024-09-27 22:38:53.155822] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:57.464 [2024-09-27 22:38:53.155875] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:57.464 [2024-09-27 22:38:53.155929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:57.464 [2024-09-27 22:38:53.156036] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:57.464 [2024-09-27 22:38:53.156052] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.464 [2024-09-27 22:38:53.156081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:57.464 [2024-09-27 22:38:53.156138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:57.464 [2024-09-27 22:38:53.156210] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:57.464 [2024-09-27 22:38:53.156219] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:57.464 [2024-09-27 22:38:53.156277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:57.464 [2024-09-27 22:38:53.156337] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:57.464 [2024-09-27 22:38:53.156349] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:57.464 [2024-09-27 22:38:53.156410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.464 pt1 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.464 22:38:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.464 "name": "raid_bdev1", 00:21:57.464 "uuid": "72bdc72d-24d2-4f18-90dd-24653517fb4e", 00:21:57.464 "strip_size_kb": 0, 00:21:57.464 "state": "online", 00:21:57.464 "raid_level": "raid1", 00:21:57.464 "superblock": true, 00:21:57.464 "num_base_bdevs": 2, 00:21:57.464 "num_base_bdevs_discovered": 1, 00:21:57.464 "num_base_bdevs_operational": 1, 00:21:57.464 "base_bdevs_list": [ 00:21:57.464 { 00:21:57.464 "name": null, 00:21:57.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.464 "is_configured": false, 00:21:57.464 "data_offset": 256, 00:21:57.464 "data_size": 7936 00:21:57.464 }, 00:21:57.464 { 00:21:57.464 "name": "pt2", 00:21:57.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.464 "is_configured": true, 00:21:57.464 "data_offset": 256, 00:21:57.464 "data_size": 7936 00:21:57.464 } 00:21:57.464 ] 00:21:57.464 }' 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.464 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.723 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:57.723 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:57.723 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.723 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.723 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.981 [2024-09-27 22:38:53.629168] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 72bdc72d-24d2-4f18-90dd-24653517fb4e '!=' 72bdc72d-24d2-4f18-90dd-24653517fb4e ']' 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89910 00:21:57.981 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89910 ']' 00:21:57.981 22:38:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89910 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89910 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.982 killing process with pid 89910 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89910' 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89910 00:21:57.982 [2024-09-27 22:38:53.709468] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.982 [2024-09-27 22:38:53.709550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.982 [2024-09-27 22:38:53.709595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.982 [2024-09-27 22:38:53.709614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:57.982 22:38:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89910 00:21:58.240 [2024-09-27 22:38:53.917781] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.150 22:38:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:00.150 00:22:00.150 real 0m6.901s 00:22:00.150 user 0m9.704s 00:22:00.150 sys 0m1.295s 
00:22:00.150 22:38:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.150 ************************************ 00:22:00.150 22:38:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.150 END TEST raid_superblock_test_md_interleaved 00:22:00.150 ************************************ 00:22:00.150 22:38:55 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:00.150 22:38:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:00.150 22:38:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:00.150 22:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:00.150 ************************************ 00:22:00.150 START TEST raid_rebuild_test_sb_md_interleaved 00:22:00.150 ************************************ 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:00.150 22:38:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:00.150 
22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=90238 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 90238 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 90238 ']' 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.150 22:38:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.150 [2024-09-27 22:38:56.024006] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:22:00.150 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:00.150 Zero copy mechanism will not be used. 
00:22:00.150 [2024-09-27 22:38:56.024296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90238 ] 00:22:00.420 [2024-09-27 22:38:56.194076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.693 [2024-09-27 22:38:56.411179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.952 [2024-09-27 22:38:56.642627] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.952 [2024-09-27 22:38:56.642661] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 BaseBdev1_malloc 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 [2024-09-27 22:38:57.152423] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:01.521 [2024-09-27 22:38:57.152489] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.521 [2024-09-27 22:38:57.152517] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:01.521 [2024-09-27 22:38:57.152532] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.521 [2024-09-27 22:38:57.154855] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.521 [2024-09-27 22:38:57.154900] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:01.521 BaseBdev1 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 BaseBdev2_malloc 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.521 [2024-09-27 22:38:57.212576] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:01.521 [2024-09-27 22:38:57.212642] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.521 [2024-09-27 22:38:57.212663] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:01.521 [2024-09-27 22:38:57.212678] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.521 [2024-09-27 22:38:57.214785] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.521 [2024-09-27 22:38:57.214945] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:01.521 BaseBdev2 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 spare_malloc 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 spare_delay 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 [2024-09-27 22:38:57.284625] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.521 [2024-09-27 22:38:57.284684] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.521 [2024-09-27 22:38:57.284705] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:01.521 [2024-09-27 22:38:57.284719] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.521 [2024-09-27 22:38:57.286799] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.521 [2024-09-27 22:38:57.286841] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.521 spare 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.521 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 [2024-09-27 22:38:57.296663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.521 [2024-09-27 22:38:57.298706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.521 [2024-09-27 
22:38:57.298927] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:01.521 [2024-09-27 22:38:57.298943] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:01.521 [2024-09-27 22:38:57.299046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:01.522 [2024-09-27 22:38:57.299128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:01.522 [2024-09-27 22:38:57.299137] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:01.522 [2024-09-27 22:38:57.299201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.522 "name": "raid_bdev1", 00:22:01.522 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:01.522 "strip_size_kb": 0, 00:22:01.522 "state": "online", 00:22:01.522 "raid_level": "raid1", 00:22:01.522 "superblock": true, 00:22:01.522 "num_base_bdevs": 2, 00:22:01.522 "num_base_bdevs_discovered": 2, 00:22:01.522 "num_base_bdevs_operational": 2, 00:22:01.522 "base_bdevs_list": [ 00:22:01.522 { 00:22:01.522 "name": "BaseBdev1", 00:22:01.522 "uuid": "0f1d6d18-651a-5c27-9156-77c155cb5032", 00:22:01.522 "is_configured": true, 00:22:01.522 "data_offset": 256, 00:22:01.522 "data_size": 7936 00:22:01.522 }, 00:22:01.522 { 00:22:01.522 "name": "BaseBdev2", 00:22:01.522 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:01.522 "is_configured": true, 00:22:01.522 "data_offset": 256, 00:22:01.522 "data_size": 7936 00:22:01.522 } 00:22:01.522 ] 00:22:01.522 }' 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.522 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 22:38:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 [2024-09-27 22:38:57.728393] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:02.090 22:38:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 [2024-09-27 22:38:57.808071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.090 22:38:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.090 "name": "raid_bdev1", 00:22:02.090 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:02.090 "strip_size_kb": 0, 00:22:02.090 "state": "online", 00:22:02.090 "raid_level": "raid1", 00:22:02.090 "superblock": true, 00:22:02.090 "num_base_bdevs": 2, 00:22:02.090 "num_base_bdevs_discovered": 1, 00:22:02.090 "num_base_bdevs_operational": 1, 00:22:02.090 "base_bdevs_list": [ 00:22:02.090 { 00:22:02.090 "name": null, 00:22:02.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.090 "is_configured": false, 00:22:02.090 "data_offset": 0, 00:22:02.090 "data_size": 7936 00:22:02.090 }, 00:22:02.090 { 00:22:02.090 "name": "BaseBdev2", 00:22:02.090 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:02.090 "is_configured": true, 00:22:02.090 "data_offset": 256, 00:22:02.090 "data_size": 7936 00:22:02.090 } 00:22:02.090 ] 00:22:02.090 }' 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.090 22:38:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.349 22:38:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:02.349 22:38:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.349 22:38:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.349 [2024-09-27 22:38:58.203548] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.349 [2024-09-27 22:38:58.223136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:02.349 22:38:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.349 22:38:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:02.349 [2024-09-27 22:38:58.225193] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.728 "name": "raid_bdev1", 00:22:03.728 
"uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:03.728 "strip_size_kb": 0, 00:22:03.728 "state": "online", 00:22:03.728 "raid_level": "raid1", 00:22:03.728 "superblock": true, 00:22:03.728 "num_base_bdevs": 2, 00:22:03.728 "num_base_bdevs_discovered": 2, 00:22:03.728 "num_base_bdevs_operational": 2, 00:22:03.728 "process": { 00:22:03.728 "type": "rebuild", 00:22:03.728 "target": "spare", 00:22:03.728 "progress": { 00:22:03.728 "blocks": 2560, 00:22:03.728 "percent": 32 00:22:03.728 } 00:22:03.728 }, 00:22:03.728 "base_bdevs_list": [ 00:22:03.728 { 00:22:03.728 "name": "spare", 00:22:03.728 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:03.728 "is_configured": true, 00:22:03.728 "data_offset": 256, 00:22:03.728 "data_size": 7936 00:22:03.728 }, 00:22:03.728 { 00:22:03.728 "name": "BaseBdev2", 00:22:03.728 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:03.728 "is_configured": true, 00:22:03.728 "data_offset": 256, 00:22:03.728 "data_size": 7936 00:22:03.728 } 00:22:03.728 ] 00:22:03.728 }' 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.728 [2024-09-27 22:38:59.372650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:03.728 [2024-09-27 22:38:59.430006] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:03.728 [2024-09-27 22:38:59.430069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.728 [2024-09-27 22:38:59.430084] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.728 [2024-09-27 22:38:59.430096] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.728 "name": "raid_bdev1", 00:22:03.728 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:03.728 "strip_size_kb": 0, 00:22:03.728 "state": "online", 00:22:03.728 "raid_level": "raid1", 00:22:03.728 "superblock": true, 00:22:03.728 "num_base_bdevs": 2, 00:22:03.728 "num_base_bdevs_discovered": 1, 00:22:03.728 "num_base_bdevs_operational": 1, 00:22:03.728 "base_bdevs_list": [ 00:22:03.728 { 00:22:03.728 "name": null, 00:22:03.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.728 "is_configured": false, 00:22:03.728 "data_offset": 0, 00:22:03.728 "data_size": 7936 00:22:03.728 }, 00:22:03.728 { 00:22:03.728 "name": "BaseBdev2", 00:22:03.728 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:03.728 "is_configured": true, 00:22:03.728 "data_offset": 256, 00:22:03.728 "data_size": 7936 00:22:03.728 } 00:22:03.728 ] 00:22:03.728 }' 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.728 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.296 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.296 "name": "raid_bdev1", 00:22:04.296 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:04.296 "strip_size_kb": 0, 00:22:04.296 "state": "online", 00:22:04.296 "raid_level": "raid1", 00:22:04.296 "superblock": true, 00:22:04.296 "num_base_bdevs": 2, 00:22:04.296 "num_base_bdevs_discovered": 1, 00:22:04.296 "num_base_bdevs_operational": 1, 00:22:04.296 "base_bdevs_list": [ 00:22:04.296 { 00:22:04.296 "name": null, 00:22:04.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.296 "is_configured": false, 00:22:04.296 "data_offset": 0, 00:22:04.296 "data_size": 7936 00:22:04.296 }, 00:22:04.296 { 00:22:04.296 "name": "BaseBdev2", 00:22:04.296 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:04.296 "is_configured": true, 00:22:04.296 "data_offset": 256, 00:22:04.296 "data_size": 7936 00:22:04.296 } 00:22:04.296 ] 00:22:04.296 }' 
00:22:04.297 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.297 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:04.297 22:38:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.297 22:39:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:04.297 22:39:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:04.297 22:39:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.297 22:39:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.297 [2024-09-27 22:39:00.037110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.297 [2024-09-27 22:39:00.055299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:04.297 22:39:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.297 22:39:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:04.297 [2024-09-27 22:39:00.057479] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.233 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.493 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.493 "name": "raid_bdev1", 00:22:05.494 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:05.494 "strip_size_kb": 0, 00:22:05.494 "state": "online", 00:22:05.494 "raid_level": "raid1", 00:22:05.494 "superblock": true, 00:22:05.494 "num_base_bdevs": 2, 00:22:05.494 "num_base_bdevs_discovered": 2, 00:22:05.494 "num_base_bdevs_operational": 2, 00:22:05.494 "process": { 00:22:05.494 "type": "rebuild", 00:22:05.494 "target": "spare", 00:22:05.494 "progress": { 00:22:05.494 "blocks": 2560, 00:22:05.494 "percent": 32 00:22:05.494 } 00:22:05.494 }, 00:22:05.494 "base_bdevs_list": [ 00:22:05.494 { 00:22:05.494 "name": "spare", 00:22:05.494 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:05.494 "is_configured": true, 00:22:05.494 "data_offset": 256, 00:22:05.494 "data_size": 7936 00:22:05.494 }, 00:22:05.494 { 00:22:05.494 "name": "BaseBdev2", 00:22:05.494 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:05.494 "is_configured": true, 00:22:05.494 "data_offset": 256, 00:22:05.494 "data_size": 7936 00:22:05.494 } 00:22:05.494 ] 00:22:05.494 }' 00:22:05.494 22:39:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:05.494 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=845 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:05.494 22:39:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.494 "name": "raid_bdev1", 00:22:05.494 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:05.494 "strip_size_kb": 0, 00:22:05.494 "state": "online", 00:22:05.494 "raid_level": "raid1", 00:22:05.494 "superblock": true, 00:22:05.494 "num_base_bdevs": 2, 00:22:05.494 "num_base_bdevs_discovered": 2, 00:22:05.494 "num_base_bdevs_operational": 2, 00:22:05.494 "process": { 00:22:05.494 "type": "rebuild", 00:22:05.494 "target": "spare", 00:22:05.494 "progress": { 00:22:05.494 "blocks": 2816, 00:22:05.494 "percent": 35 00:22:05.494 } 00:22:05.494 }, 00:22:05.494 "base_bdevs_list": [ 00:22:05.494 { 00:22:05.494 "name": "spare", 00:22:05.494 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:05.494 "is_configured": true, 00:22:05.494 "data_offset": 256, 00:22:05.494 "data_size": 7936 00:22:05.494 }, 00:22:05.494 { 00:22:05.494 "name": "BaseBdev2", 00:22:05.494 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:05.494 "is_configured": true, 00:22:05.494 "data_offset": 256, 00:22:05.494 "data_size": 7936 00:22:05.494 } 00:22:05.494 ] 00:22:05.494 }' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.494 22:39:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.883 22:39:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.883 "name": "raid_bdev1", 00:22:06.883 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:06.883 "strip_size_kb": 0, 00:22:06.883 "state": "online", 00:22:06.883 "raid_level": "raid1", 00:22:06.883 "superblock": true, 00:22:06.883 "num_base_bdevs": 2, 00:22:06.883 "num_base_bdevs_discovered": 2, 00:22:06.883 "num_base_bdevs_operational": 2, 00:22:06.883 "process": { 00:22:06.883 "type": "rebuild", 00:22:06.883 "target": "spare", 00:22:06.883 "progress": { 00:22:06.883 "blocks": 5632, 00:22:06.883 "percent": 70 00:22:06.883 } 00:22:06.883 }, 00:22:06.883 "base_bdevs_list": [ 00:22:06.883 { 00:22:06.883 "name": "spare", 00:22:06.883 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:06.883 "is_configured": true, 00:22:06.883 "data_offset": 256, 00:22:06.883 "data_size": 7936 00:22:06.883 }, 00:22:06.883 { 00:22:06.883 "name": "BaseBdev2", 00:22:06.883 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:06.883 "is_configured": true, 00:22:06.883 "data_offset": 256, 00:22:06.883 "data_size": 7936 00:22:06.883 } 00:22:06.883 ] 00:22:06.883 }' 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.883 22:39:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:07.451 [2024-09-27 22:39:03.169707] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:07.451 [2024-09-27 22:39:03.169780] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:07.451 [2024-09-27 22:39:03.169882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.711 "name": "raid_bdev1", 00:22:07.711 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:07.711 "strip_size_kb": 0, 00:22:07.711 "state": "online", 00:22:07.711 "raid_level": "raid1", 00:22:07.711 "superblock": true, 00:22:07.711 "num_base_bdevs": 2, 00:22:07.711 
"num_base_bdevs_discovered": 2, 00:22:07.711 "num_base_bdevs_operational": 2, 00:22:07.711 "base_bdevs_list": [ 00:22:07.711 { 00:22:07.711 "name": "spare", 00:22:07.711 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:07.711 "is_configured": true, 00:22:07.711 "data_offset": 256, 00:22:07.711 "data_size": 7936 00:22:07.711 }, 00:22:07.711 { 00:22:07.711 "name": "BaseBdev2", 00:22:07.711 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:07.711 "is_configured": true, 00:22:07.711 "data_offset": 256, 00:22:07.711 "data_size": 7936 00:22:07.711 } 00:22:07.711 ] 00:22:07.711 }' 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.711 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.971 
22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.971 "name": "raid_bdev1", 00:22:07.971 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:07.971 "strip_size_kb": 0, 00:22:07.971 "state": "online", 00:22:07.971 "raid_level": "raid1", 00:22:07.971 "superblock": true, 00:22:07.971 "num_base_bdevs": 2, 00:22:07.971 "num_base_bdevs_discovered": 2, 00:22:07.971 "num_base_bdevs_operational": 2, 00:22:07.971 "base_bdevs_list": [ 00:22:07.971 { 00:22:07.971 "name": "spare", 00:22:07.971 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:07.971 "is_configured": true, 00:22:07.971 "data_offset": 256, 00:22:07.971 "data_size": 7936 00:22:07.971 }, 00:22:07.971 { 00:22:07.971 "name": "BaseBdev2", 00:22:07.971 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:07.971 "is_configured": true, 00:22:07.971 "data_offset": 256, 00:22:07.971 "data_size": 7936 00:22:07.971 } 00:22:07.971 ] 00:22:07.971 }' 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:07.971 22:39:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.971 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.972 "name": 
"raid_bdev1", 00:22:07.972 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:07.972 "strip_size_kb": 0, 00:22:07.972 "state": "online", 00:22:07.972 "raid_level": "raid1", 00:22:07.972 "superblock": true, 00:22:07.972 "num_base_bdevs": 2, 00:22:07.972 "num_base_bdevs_discovered": 2, 00:22:07.972 "num_base_bdevs_operational": 2, 00:22:07.972 "base_bdevs_list": [ 00:22:07.972 { 00:22:07.972 "name": "spare", 00:22:07.972 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:07.972 "is_configured": true, 00:22:07.972 "data_offset": 256, 00:22:07.972 "data_size": 7936 00:22:07.972 }, 00:22:07.972 { 00:22:07.972 "name": "BaseBdev2", 00:22:07.972 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:07.972 "is_configured": true, 00:22:07.972 "data_offset": 256, 00:22:07.972 "data_size": 7936 00:22:07.972 } 00:22:07.972 ] 00:22:07.972 }' 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.972 22:39:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.541 [2024-09-27 22:39:04.150005] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:08.541 [2024-09-27 22:39:04.150038] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.541 [2024-09-27 22:39:04.150123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.541 [2024-09-27 22:39:04.150192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.541 [2024-09-27 
22:39:04.150204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.541 22:39:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.541 [2024-09-27 22:39:04.217894] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:08.541 [2024-09-27 22:39:04.218087] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.541 [2024-09-27 22:39:04.218121] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:08.541 [2024-09-27 22:39:04.218132] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.541 [2024-09-27 22:39:04.220339] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.541 [2024-09-27 22:39:04.220378] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:08.541 [2024-09-27 22:39:04.220440] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:08.541 [2024-09-27 22:39:04.220505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:08.541 [2024-09-27 22:39:04.220613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:08.541 spare 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.541 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.542 [2024-09-27 22:39:04.320538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:08.542 [2024-09-27 22:39:04.320582] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:08.542 [2024-09-27 22:39:04.320701] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:08.542 [2024-09-27 22:39:04.320816] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:08.542 [2024-09-27 22:39:04.320826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:08.542 [2024-09-27 22:39:04.320925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.542 22:39:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.542 "name": "raid_bdev1", 00:22:08.542 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:08.542 "strip_size_kb": 0, 00:22:08.542 "state": "online", 00:22:08.542 "raid_level": "raid1", 00:22:08.542 "superblock": true, 00:22:08.542 "num_base_bdevs": 2, 00:22:08.542 "num_base_bdevs_discovered": 2, 00:22:08.542 "num_base_bdevs_operational": 2, 00:22:08.542 "base_bdevs_list": [ 00:22:08.542 { 00:22:08.542 "name": "spare", 00:22:08.542 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:08.542 "is_configured": true, 00:22:08.542 "data_offset": 256, 00:22:08.542 "data_size": 7936 00:22:08.542 }, 00:22:08.542 { 00:22:08.542 "name": "BaseBdev2", 00:22:08.542 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:08.542 "is_configured": true, 00:22:08.542 "data_offset": 256, 00:22:08.542 "data_size": 7936 00:22:08.542 } 00:22:08.542 ] 00:22:08.542 }' 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.542 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:09.111 22:39:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.111 "name": "raid_bdev1", 00:22:09.111 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:09.111 "strip_size_kb": 0, 00:22:09.111 "state": "online", 00:22:09.111 "raid_level": "raid1", 00:22:09.111 "superblock": true, 00:22:09.111 "num_base_bdevs": 2, 00:22:09.111 "num_base_bdevs_discovered": 2, 00:22:09.111 "num_base_bdevs_operational": 2, 00:22:09.111 "base_bdevs_list": [ 00:22:09.111 { 00:22:09.111 "name": "spare", 00:22:09.111 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:09.111 "is_configured": true, 00:22:09.111 "data_offset": 256, 00:22:09.111 "data_size": 7936 00:22:09.111 }, 00:22:09.111 { 00:22:09.111 "name": "BaseBdev2", 00:22:09.111 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:09.111 "is_configured": true, 00:22:09.111 "data_offset": 256, 00:22:09.111 "data_size": 7936 00:22:09.111 } 00:22:09.111 ] 00:22:09.111 }' 00:22:09.111 22:39:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.111 [2024-09-27 22:39:04.908971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:09.111 22:39:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.111 "name": "raid_bdev1", 00:22:09.111 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:09.111 "strip_size_kb": 0, 00:22:09.111 "state": "online", 00:22:09.111 
"raid_level": "raid1", 00:22:09.111 "superblock": true, 00:22:09.111 "num_base_bdevs": 2, 00:22:09.111 "num_base_bdevs_discovered": 1, 00:22:09.111 "num_base_bdevs_operational": 1, 00:22:09.111 "base_bdevs_list": [ 00:22:09.111 { 00:22:09.111 "name": null, 00:22:09.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.111 "is_configured": false, 00:22:09.111 "data_offset": 0, 00:22:09.111 "data_size": 7936 00:22:09.111 }, 00:22:09.111 { 00:22:09.111 "name": "BaseBdev2", 00:22:09.111 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:09.111 "is_configured": true, 00:22:09.111 "data_offset": 256, 00:22:09.111 "data_size": 7936 00:22:09.111 } 00:22:09.111 ] 00:22:09.111 }' 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.111 22:39:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.680 22:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.680 22:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.680 22:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:09.680 [2024-09-27 22:39:05.312446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.680 [2024-09-27 22:39:05.312627] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:09.680 [2024-09-27 22:39:05.312645] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:09.680 [2024-09-27 22:39:05.312686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.680 [2024-09-27 22:39:05.331038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:09.680 22:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.680 22:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:09.680 [2024-09-27 22:39:05.333152] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.618 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:22:10.619 "name": "raid_bdev1", 00:22:10.619 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:10.619 "strip_size_kb": 0, 00:22:10.619 "state": "online", 00:22:10.619 "raid_level": "raid1", 00:22:10.619 "superblock": true, 00:22:10.619 "num_base_bdevs": 2, 00:22:10.619 "num_base_bdevs_discovered": 2, 00:22:10.619 "num_base_bdevs_operational": 2, 00:22:10.619 "process": { 00:22:10.619 "type": "rebuild", 00:22:10.619 "target": "spare", 00:22:10.619 "progress": { 00:22:10.619 "blocks": 2560, 00:22:10.619 "percent": 32 00:22:10.619 } 00:22:10.619 }, 00:22:10.619 "base_bdevs_list": [ 00:22:10.619 { 00:22:10.619 "name": "spare", 00:22:10.619 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:10.619 "is_configured": true, 00:22:10.619 "data_offset": 256, 00:22:10.619 "data_size": 7936 00:22:10.619 }, 00:22:10.619 { 00:22:10.619 "name": "BaseBdev2", 00:22:10.619 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:10.619 "is_configured": true, 00:22:10.619 "data_offset": 256, 00:22:10.619 "data_size": 7936 00:22:10.619 } 00:22:10.619 ] 00:22:10.619 }' 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.619 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.619 [2024-09-27 22:39:06.469091] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.879 [2024-09-27 22:39:06.537960] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:10.879 [2024-09-27 22:39:06.538044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.879 [2024-09-27 22:39:06.538060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.879 [2024-09-27 22:39:06.538075] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.879 22:39:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.879 "name": "raid_bdev1", 00:22:10.879 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:10.879 "strip_size_kb": 0, 00:22:10.879 "state": "online", 00:22:10.879 "raid_level": "raid1", 00:22:10.879 "superblock": true, 00:22:10.879 "num_base_bdevs": 2, 00:22:10.879 "num_base_bdevs_discovered": 1, 00:22:10.879 "num_base_bdevs_operational": 1, 00:22:10.879 "base_bdevs_list": [ 00:22:10.879 { 00:22:10.879 "name": null, 00:22:10.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.879 "is_configured": false, 00:22:10.879 "data_offset": 0, 00:22:10.879 "data_size": 7936 00:22:10.879 }, 00:22:10.879 { 00:22:10.879 "name": "BaseBdev2", 00:22:10.879 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:10.879 "is_configured": true, 00:22:10.879 "data_offset": 256, 00:22:10.879 "data_size": 7936 00:22:10.879 } 00:22:10.879 ] 00:22:10.879 }' 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.879 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.138 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:11.138 22:39:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.138 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:11.138 [2024-09-27 22:39:06.961173] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:11.138 [2024-09-27 22:39:06.961239] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.138 [2024-09-27 22:39:06.961264] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:11.138 [2024-09-27 22:39:06.961278] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.138 [2024-09-27 22:39:06.961469] vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.138 [2024-09-27 22:39:06.961489] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:11.138 [2024-09-27 22:39:06.961548] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:11.138 [2024-09-27 22:39:06.961563] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:11.138 [2024-09-27 22:39:06.961574] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
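The traces above repeatedly run `verify_raid_bdev_process`, which fetches raid bdev state via `rpc_cmd bdev_raid_get_bdevs all`, selects the bdev with `jq`, and compares the `process.type` / `process.target` fields against the expected values, defaulting to `"none"` when no background process is running. The following is a minimal standalone sketch of that check logic only; it uses a canned JSON blob in place of a live `rpc_cmd` call, and the `verify_process` helper name and sample data are illustrative, not the actual helper from `bdev_raid.sh`.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Canned output shaped like `bdev_raid_get_bdevs all`; a real test run
# would obtain this from rpc_cmd against a live SPDK target instead.
raid_bdevs='[{ "name": "raid_bdev1",
               "process": { "type": "rebuild", "target": "spare",
                            "progress": { "blocks": 2560, "percent": 32 } } }]'

# Mirrors the pattern visible in the trace: select the raid bdev by name,
# then check process type and target, treating a missing .process field
# as "none" via jq's // alternative operator.
verify_process() {
    local name=$1 expected_type=$2 expected_target=$3 info
    info=$(jq -r ".[] | select(.name == \"$name\")" <<< "$raid_bdevs")
    [[ $(jq -r '.process.type // "none"' <<< "$info") == "$expected_type" ]]
    [[ $(jq -r '.process.target // "none"' <<< "$info") == "$expected_target" ]]
}

verify_process raid_bdev1 rebuild spare && echo "process check passed"
```

Because `jq`'s `//` operator substitutes `"none"` when `.process` is absent, the same helper also serves the `verify_raid_bdev_process raid_bdev1 none none` calls seen later in the trace, after the rebuild has finished or been torn down.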
00:22:11.138 [2024-09-27 22:39:06.961598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:11.138 [2024-09-27 22:39:06.980320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:11.138 spare 00:22:11.138 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.138 22:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:11.138 [2024-09-27 22:39:06.982515] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:12.522 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.522 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.523 22:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:12.523 "name": "raid_bdev1", 00:22:12.523 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:12.523 "strip_size_kb": 0, 00:22:12.523 "state": "online", 00:22:12.523 "raid_level": "raid1", 00:22:12.523 "superblock": true, 00:22:12.523 "num_base_bdevs": 2, 00:22:12.523 "num_base_bdevs_discovered": 2, 00:22:12.523 "num_base_bdevs_operational": 2, 00:22:12.523 "process": { 00:22:12.523 "type": "rebuild", 00:22:12.523 "target": "spare", 00:22:12.523 "progress": { 00:22:12.523 "blocks": 2560, 00:22:12.523 "percent": 32 00:22:12.523 } 00:22:12.523 }, 00:22:12.523 "base_bdevs_list": [ 00:22:12.523 { 00:22:12.523 "name": "spare", 00:22:12.523 "uuid": "88f9139d-5bba-5244-8e1f-f214568b7347", 00:22:12.523 "is_configured": true, 00:22:12.523 "data_offset": 256, 00:22:12.523 "data_size": 7936 00:22:12.523 }, 00:22:12.523 { 00:22:12.523 "name": "BaseBdev2", 00:22:12.523 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:12.523 "is_configured": true, 00:22:12.523 "data_offset": 256, 00:22:12.523 "data_size": 7936 00:22:12.523 } 00:22:12.523 ] 00:22:12.523 }' 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.523 [2024-09-27 
22:39:08.114144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.523 [2024-09-27 22:39:08.187467] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:12.523 [2024-09-27 22:39:08.187694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.523 [2024-09-27 22:39:08.187787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.523 [2024-09-27 22:39:08.187825] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.523 22:39:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.523 "name": "raid_bdev1", 00:22:12.523 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:12.523 "strip_size_kb": 0, 00:22:12.523 "state": "online", 00:22:12.523 "raid_level": "raid1", 00:22:12.523 "superblock": true, 00:22:12.523 "num_base_bdevs": 2, 00:22:12.523 "num_base_bdevs_discovered": 1, 00:22:12.523 "num_base_bdevs_operational": 1, 00:22:12.523 "base_bdevs_list": [ 00:22:12.523 { 00:22:12.523 "name": null, 00:22:12.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.523 "is_configured": false, 00:22:12.523 "data_offset": 0, 00:22:12.523 "data_size": 7936 00:22:12.523 }, 00:22:12.523 { 00:22:12.523 "name": "BaseBdev2", 00:22:12.523 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:12.523 "is_configured": true, 00:22:12.523 "data_offset": 256, 00:22:12.523 "data_size": 7936 00:22:12.523 } 00:22:12.523 ] 00:22:12.523 }' 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.523 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:12.783 22:39:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.783 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.042 "name": "raid_bdev1", 00:22:13.042 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:13.042 "strip_size_kb": 0, 00:22:13.042 "state": "online", 00:22:13.042 "raid_level": "raid1", 00:22:13.042 "superblock": true, 00:22:13.042 "num_base_bdevs": 2, 00:22:13.042 "num_base_bdevs_discovered": 1, 00:22:13.042 "num_base_bdevs_operational": 1, 00:22:13.042 "base_bdevs_list": [ 00:22:13.042 { 00:22:13.042 "name": null, 00:22:13.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.042 "is_configured": false, 00:22:13.042 "data_offset": 0, 00:22:13.042 "data_size": 7936 00:22:13.042 }, 00:22:13.042 { 00:22:13.042 "name": "BaseBdev2", 00:22:13.042 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:13.042 "is_configured": true, 00:22:13.042 "data_offset": 256, 
00:22:13.042 "data_size": 7936 00:22:13.042 } 00:22:13.042 ] 00:22:13.042 }' 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.042 [2024-09-27 22:39:08.771245] vbdev_passthru.c: 687:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:13.042 [2024-09-27 22:39:08.771307] vbdev_passthru.c: 715:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.042 [2024-09-27 22:39:08.771331] vbdev_passthru.c: 762:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:13.042 [2024-09-27 22:39:08.771343] vbdev_passthru.c: 777:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.042 [2024-09-27 22:39:08.771510] 
vbdev_passthru.c: 790:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.042 [2024-09-27 22:39:08.771523] vbdev_passthru.c: 791:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:13.042 [2024-09-27 22:39:08.771575] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:13.042 [2024-09-27 22:39:08.771588] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:13.042 [2024-09-27 22:39:08.771600] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:13.042 [2024-09-27 22:39:08.771612] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:13.042 BaseBdev1 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.042 22:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.978 22:39:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.978 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.978 "name": "raid_bdev1", 00:22:13.978 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:13.978 "strip_size_kb": 0, 00:22:13.978 "state": "online", 00:22:13.978 "raid_level": "raid1", 00:22:13.978 "superblock": true, 00:22:13.978 "num_base_bdevs": 2, 00:22:13.978 "num_base_bdevs_discovered": 1, 00:22:13.978 "num_base_bdevs_operational": 1, 00:22:13.978 "base_bdevs_list": [ 00:22:13.978 { 00:22:13.978 "name": null, 00:22:13.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.978 "is_configured": false, 00:22:13.978 "data_offset": 0, 00:22:13.978 "data_size": 7936 00:22:13.978 }, 00:22:13.978 { 00:22:13.978 "name": "BaseBdev2", 00:22:13.979 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:13.979 "is_configured": true, 00:22:13.979 "data_offset": 256, 00:22:13.979 "data_size": 7936 00:22:13.979 } 00:22:13.979 ] 00:22:13.979 }' 00:22:13.979 22:39:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.979 22:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.546 "name": "raid_bdev1", 00:22:14.546 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:14.546 "strip_size_kb": 0, 00:22:14.546 "state": "online", 00:22:14.546 "raid_level": "raid1", 00:22:14.546 "superblock": true, 00:22:14.546 "num_base_bdevs": 2, 00:22:14.546 "num_base_bdevs_discovered": 1, 00:22:14.546 "num_base_bdevs_operational": 1, 00:22:14.546 "base_bdevs_list": [ 00:22:14.546 { 00:22:14.546 "name": 
null, 00:22:14.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.546 "is_configured": false, 00:22:14.546 "data_offset": 0, 00:22:14.546 "data_size": 7936 00:22:14.546 }, 00:22:14.546 { 00:22:14.546 "name": "BaseBdev2", 00:22:14.546 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:14.546 "is_configured": true, 00:22:14.546 "data_offset": 256, 00:22:14.546 "data_size": 7936 00:22:14.546 } 00:22:14.546 ] 00:22:14.546 }' 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:14.546 [2024-09-27 22:39:10.305146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:14.546 [2024-09-27 22:39:10.305303] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:14.546 [2024-09-27 22:39:10.305323] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:14.546 request: 00:22:14.546 { 00:22:14.546 "base_bdev": "BaseBdev1", 00:22:14.546 "raid_bdev": "raid_bdev1", 00:22:14.546 "method": "bdev_raid_add_base_bdev", 00:22:14.546 "req_id": 1 00:22:14.546 } 00:22:14.546 Got JSON-RPC error response 00:22:14.546 response: 00:22:14.546 { 00:22:14.546 "code": -22, 00:22:14.546 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:14.546 } 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.546 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.547 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.547 22:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:15.483 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.743 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.743 "name": "raid_bdev1", 00:22:15.743 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:15.743 "strip_size_kb": 0, 
00:22:15.743 "state": "online", 00:22:15.743 "raid_level": "raid1", 00:22:15.743 "superblock": true, 00:22:15.743 "num_base_bdevs": 2, 00:22:15.743 "num_base_bdevs_discovered": 1, 00:22:15.743 "num_base_bdevs_operational": 1, 00:22:15.743 "base_bdevs_list": [ 00:22:15.743 { 00:22:15.743 "name": null, 00:22:15.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.743 "is_configured": false, 00:22:15.743 "data_offset": 0, 00:22:15.743 "data_size": 7936 00:22:15.743 }, 00:22:15.743 { 00:22:15.743 "name": "BaseBdev2", 00:22:15.743 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:15.743 "is_configured": true, 00:22:15.743 "data_offset": 256, 00:22:15.743 "data_size": 7936 00:22:15.743 } 00:22:15.743 ] 00:22:15.743 }' 00:22:15.743 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.743 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:16.002 22:39:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.002 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.002 "name": "raid_bdev1", 00:22:16.002 "uuid": "f32ad342-d68d-4d09-97ec-6102bcc92307", 00:22:16.002 "strip_size_kb": 0, 00:22:16.002 "state": "online", 00:22:16.002 "raid_level": "raid1", 00:22:16.002 "superblock": true, 00:22:16.002 "num_base_bdevs": 2, 00:22:16.002 "num_base_bdevs_discovered": 1, 00:22:16.002 "num_base_bdevs_operational": 1, 00:22:16.002 "base_bdevs_list": [ 00:22:16.002 { 00:22:16.002 "name": null, 00:22:16.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.003 "is_configured": false, 00:22:16.003 "data_offset": 0, 00:22:16.003 "data_size": 7936 00:22:16.003 }, 00:22:16.003 { 00:22:16.003 "name": "BaseBdev2", 00:22:16.003 "uuid": "b390474e-3bc9-55de-b6b3-daea4d45b1de", 00:22:16.003 "is_configured": true, 00:22:16.003 "data_offset": 256, 00:22:16.003 "data_size": 7936 00:22:16.003 } 00:22:16.003 ] 00:22:16.003 }' 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 90238 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 90238 ']' 00:22:16.003 22:39:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 90238 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.003 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90238 00:22:16.263 killing process with pid 90238 00:22:16.263 Received shutdown signal, test time was about 60.000000 seconds 00:22:16.263 00:22:16.263 Latency(us) 00:22:16.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.263 =================================================================================================================== 00:22:16.263 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.263 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:16.263 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:16.263 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90238' 00:22:16.263 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 90238 00:22:16.263 [2024-09-27 22:39:11.893911] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:16.263 [2024-09-27 22:39:11.894044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.263 22:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 90238 00:22:16.263 [2024-09-27 22:39:11.894090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.263 [2024-09-27 22:39:11.894105] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:16.522 [2024-09-27 22:39:12.205461] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.432 22:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:22:18.432 00:22:18.432 real 0m18.205s 00:22:18.432 user 0m23.186s 00:22:18.432 sys 0m1.824s 00:22:18.432 22:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.432 22:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 ************************************ 00:22:18.432 END TEST raid_rebuild_test_sb_md_interleaved 00:22:18.432 ************************************ 00:22:18.432 22:39:14 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:22:18.432 22:39:14 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:22:18.432 22:39:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 90238 ']' 00:22:18.432 22:39:14 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 90238 00:22:18.432 22:39:14 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:22:18.432 ************************************ 00:22:18.432 END TEST bdev_raid 00:22:18.432 ************************************ 00:22:18.432 00:22:18.432 real 13m47.296s 00:22:18.432 user 17m50.091s 00:22:18.432 sys 2m17.153s 00:22:18.432 22:39:14 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.432 22:39:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 22:39:14 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:18.432 22:39:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:18.432 22:39:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.432 22:39:14 -- common/autotest_common.sh@10 -- # set +x 00:22:18.691 ************************************ 00:22:18.691 START TEST spdkcli_raid 00:22:18.691 
************************************ 00:22:18.691 22:39:14 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:18.692 * Looking for test storage... 00:22:18.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.692 22:39:14 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.692 --rc genhtml_branch_coverage=1 00:22:18.692 --rc genhtml_function_coverage=1 00:22:18.692 --rc genhtml_legend=1 00:22:18.692 --rc geninfo_all_blocks=1 00:22:18.692 --rc geninfo_unexecuted_blocks=1 00:22:18.692 00:22:18.692 ' 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.692 --rc genhtml_branch_coverage=1 00:22:18.692 --rc genhtml_function_coverage=1 00:22:18.692 --rc genhtml_legend=1 00:22:18.692 --rc geninfo_all_blocks=1 00:22:18.692 --rc geninfo_unexecuted_blocks=1 00:22:18.692 00:22:18.692 ' 00:22:18.692 
22:39:14 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.692 --rc genhtml_branch_coverage=1 00:22:18.692 --rc genhtml_function_coverage=1 00:22:18.692 --rc genhtml_legend=1 00:22:18.692 --rc geninfo_all_blocks=1 00:22:18.692 --rc geninfo_unexecuted_blocks=1 00:22:18.692 00:22:18.692 ' 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:18.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.692 --rc genhtml_branch_coverage=1 00:22:18.692 --rc genhtml_function_coverage=1 00:22:18.692 --rc genhtml_legend=1 00:22:18.692 --rc geninfo_all_blocks=1 00:22:18.692 --rc geninfo_unexecuted_blocks=1 00:22:18.692 00:22:18.692 ' 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:18.692 22:39:14 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:22:18.692 22:39:14 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.692 22:39:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.952 22:39:14 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:22:18.952 22:39:14 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90929 00:22:18.952 22:39:14 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:18.952 22:39:14 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90929 00:22:18.952 22:39:14 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90929 ']' 00:22:18.952 22:39:14 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.952 22:39:14 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.952 22:39:14 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.952 22:39:14 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.952 22:39:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.952 [2024-09-27 22:39:14.678880] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:22:18.952 [2024-09-27 22:39:14.679222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90929 ] 00:22:19.213 [2024-09-27 22:39:14.845544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:19.213 [2024-09-27 22:39:15.078279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.213 [2024-09-27 22:39:15.078314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.600 22:39:16 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.600 22:39:16 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:22:20.600 22:39:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:20.600 22:39:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.600 22:39:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 22:39:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:20.600 22:39:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.600 22:39:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:20.600 22:39:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:20.600 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:20.600 ' 00:22:22.516 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:22.516 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:22.516 22:39:18 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:22.516 22:39:18 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:22.516 22:39:18 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.516 22:39:18 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:22.516 22:39:18 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.516 22:39:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.516 22:39:18 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:22.516 ' 00:22:23.455 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:23.455 22:39:19 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:23.455 22:39:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.455 22:39:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.715 22:39:19 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:23.715 22:39:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.715 22:39:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.715 22:39:19 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:23.715 22:39:19 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:24.286 22:39:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:24.286 22:39:19 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:24.286 22:39:19 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:24.286 22:39:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.286 22:39:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.286 22:39:19 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:24.286 22:39:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:24.286 22:39:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.286 22:39:19 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:24.286 ' 00:22:25.224 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:25.224 22:39:21 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:25.224 22:39:21 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.224 22:39:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:25.224 22:39:21 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:25.224 22:39:21 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.224 22:39:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:25.483 22:39:21 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:25.483 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:25.483 ' 00:22:26.858 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:26.858 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:26.858 22:39:22 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.858 22:39:22 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90929 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90929 ']' 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90929 00:22:26.858 22:39:22 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90929 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90929' 00:22:26.858 killing process with pid 90929 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90929 00:22:26.858 22:39:22 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90929 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90929 ']' 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90929 00:22:30.142 22:39:25 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90929 ']' 00:22:30.142 Process with pid 90929 is not found 00:22:30.142 22:39:25 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90929 00:22:30.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90929) - No such process 00:22:30.142 22:39:25 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90929 is not found' 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:30.142 22:39:25 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:30.142 00:22:30.142 real 0m11.630s 00:22:30.142 user 0m23.378s 00:22:30.142 sys 
0m1.287s 00:22:30.142 22:39:25 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.142 22:39:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:30.142 ************************************ 00:22:30.142 END TEST spdkcli_raid 00:22:30.142 ************************************ 00:22:30.142 22:39:26 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:30.142 22:39:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:30.142 22:39:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.142 22:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:30.142 ************************************ 00:22:30.142 START TEST blockdev_raid5f 00:22:30.142 ************************************ 00:22:30.142 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:30.401 * Looking for test storage... 00:22:30.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:30.401 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:30.401 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:22:30.401 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:30.401 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:30.401 22:39:26 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.402 22:39:26 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:30.402 22:39:26 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.402 22:39:26 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.402 22:39:26 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.402 22:39:26 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:30.402 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.402 --rc genhtml_branch_coverage=1 00:22:30.402 --rc genhtml_function_coverage=1 00:22:30.402 --rc genhtml_legend=1 00:22:30.402 --rc geninfo_all_blocks=1 00:22:30.402 --rc geninfo_unexecuted_blocks=1 00:22:30.402 00:22:30.402 ' 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:30.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.402 --rc genhtml_branch_coverage=1 00:22:30.402 --rc genhtml_function_coverage=1 00:22:30.402 --rc genhtml_legend=1 00:22:30.402 --rc geninfo_all_blocks=1 00:22:30.402 --rc geninfo_unexecuted_blocks=1 00:22:30.402 00:22:30.402 ' 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:30.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.402 --rc genhtml_branch_coverage=1 00:22:30.402 --rc genhtml_function_coverage=1 00:22:30.402 --rc genhtml_legend=1 00:22:30.402 --rc geninfo_all_blocks=1 00:22:30.402 --rc geninfo_unexecuted_blocks=1 00:22:30.402 00:22:30.402 ' 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:30.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.402 --rc genhtml_branch_coverage=1 00:22:30.402 --rc genhtml_function_coverage=1 00:22:30.402 --rc genhtml_legend=1 00:22:30.402 --rc geninfo_all_blocks=1 00:22:30.402 --rc geninfo_unexecuted_blocks=1 00:22:30.402 00:22:30.402 ' 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_1=Malloc_0 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_DEV_2=Null_1 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@673 -- # QOS_RUN_TIME=5 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@675 -- # uname -s 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@675 -- # '[' Linux = Linux ']' 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=0 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@683 -- # test_type=raid5f 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@684 -- # crypto_device= 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@685 -- # dek= 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@686 -- # env_ctx= 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@687 -- # wait_for_rpc= 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@688 -- # '[' -n '' ']' 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@691 -- # [[ raid5f == bdev ]] 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@691 -- # [[ raid5f == crypto_* ]] 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@694 -- # start_spdk_tgt 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=91222 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:30.402 22:39:26 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 91222 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 91222 ']' 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.402 22:39:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:30.660 [2024-09-27 22:39:26.376639] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:22:30.660 [2024-09-27 22:39:26.377474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91222 ] 00:22:30.919 [2024-09-27 22:39:26.547314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.919 [2024-09-27 22:39:26.765426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.295 22:39:28 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.295 22:39:28 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:22:32.295 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@695 -- # case "$test_type" in 00:22:32.295 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@727 -- # setup_raid5f_conf 00:22:32.295 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@281 -- # rpc_cmd 00:22:32.295 22:39:28 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.295 22:39:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.295 Malloc0 00:22:32.295 Malloc1 00:22:32.563 Malloc2 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@738 -- # rpc_cmd bdev_wait_for_examine 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@741 -- # cat 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@741 -- # rpc_cmd save_subsystem_config -n accel 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@741 -- # rpc_cmd save_subsystem_config -n bdev 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@741 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@749 -- # rpc_cmd bdev_get_bdevs 
00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r '.[] | select(.claimed == false)' 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@750 -- # mapfile -t bdevs_name 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@750 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cda61766-5b21-4bea-8156-14c21d0ef954"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cda61766-5b21-4bea-8156-14c21d0ef954",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cda61766-5b21-4bea-8156-14c21d0ef954",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1179b394-9177-4b46-8c34-65408bebae87",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7f0f980e-248a-4342-aeae-b0741fc7874e",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ee57fde1-eb83-498d-ad94-84cb848e0b99",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@750 -- # jq -r .name 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@751 -- # bdev_list=("${bdevs_name[@]}") 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@753 -- # hello_world_bdev=raid5f 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@754 -- # trap - SIGINT SIGTERM EXIT 00:22:32.563 22:39:28 blockdev_raid5f -- bdev/blockdev.sh@755 -- # killprocess 91222 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 91222 ']' 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 91222 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.563 22:39:28 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91222 00:22:32.835 killing process with pid 91222 00:22:32.835 22:39:28 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.835 22:39:28 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.835 22:39:28 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91222' 00:22:32.835 22:39:28 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 91222 00:22:32.835 22:39:28 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 91222 00:22:37.026 22:39:32 blockdev_raid5f -- bdev/blockdev.sh@759 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:37.026 22:39:32 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:37.026 22:39:32 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:37.026 22:39:32 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.026 22:39:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:37.026 ************************************ 00:22:37.026 START TEST bdev_hello_world 00:22:37.026 ************************************ 00:22:37.027 22:39:32 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:37.027 [2024-09-27 22:39:32.103130] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:22:37.027 [2024-09-27 22:39:32.103247] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91299 ] 00:22:37.027 [2024-09-27 22:39:32.275021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.027 [2024-09-27 22:39:32.499753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.596 [2024-09-27 22:39:33.377727] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:37.596 [2024-09-27 22:39:33.377778] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:37.596 [2024-09-27 22:39:33.377797] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:37.596 [2024-09-27 22:39:33.378325] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:37.596 [2024-09-27 22:39:33.378485] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:37.596 [2024-09-27 22:39:33.378504] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:37.596 [2024-09-27 22:39:33.378553] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:22:37.596 00:22:37.596 [2024-09-27 22:39:33.378572] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:40.183 00:22:40.183 real 0m3.638s 00:22:40.183 user 0m3.164s 00:22:40.183 sys 0m0.348s 00:22:40.183 ************************************ 00:22:40.183 END TEST bdev_hello_world 00:22:40.183 ************************************ 00:22:40.183 22:39:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:40.183 22:39:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:40.183 22:39:35 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_bounds bdev_bounds '' 00:22:40.183 22:39:35 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:40.183 22:39:35 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:40.183 22:39:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:40.183 ************************************ 00:22:40.183 START TEST bdev_bounds 00:22:40.183 ************************************ 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # bdevio_pid=91359 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:40.183 Process bdevio pid: 91359 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # echo 'Process bdevio pid: 91359' 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # waitforlisten 91359 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 91359 ']' 00:22:40.183 22:39:35 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.183 22:39:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:40.183 [2024-09-27 22:39:35.814278] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:22:40.183 [2024-09-27 22:39:35.814603] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91359 ] 00:22:40.183 [2024-09-27 22:39:35.986520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.442 [2024-09-27 22:39:36.220101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.442 [2024-09-27 22:39:36.220175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.442 [2024-09-27 22:39:36.220147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.378 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.378 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:22:41.378 22:39:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:41.378 I/O targets: 00:22:41.378 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:41.378 00:22:41.378 
00:22:41.378 CUnit - A unit testing framework for C - Version 2.1-3 00:22:41.378 http://cunit.sourceforge.net/ 00:22:41.378 00:22:41.378 00:22:41.378 Suite: bdevio tests on: raid5f 00:22:41.378 Test: blockdev write read block ...passed 00:22:41.378 Test: blockdev write zeroes read block ...passed 00:22:41.637 Test: blockdev write zeroes read no split ...passed 00:22:41.637 Test: blockdev write zeroes read split ...passed 00:22:41.637 Test: blockdev write zeroes read split partial ...passed 00:22:41.637 Test: blockdev reset ...passed 00:22:41.637 Test: blockdev write read 8 blocks ...passed 00:22:41.637 Test: blockdev write read size > 128k ...passed 00:22:41.637 Test: blockdev write read invalid size ...passed 00:22:41.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:41.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:41.637 Test: blockdev write read max offset ...passed 00:22:41.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:41.637 Test: blockdev writev readv 8 blocks ...passed 00:22:41.637 Test: blockdev writev readv 30 x 1block ...passed 00:22:41.637 Test: blockdev writev readv block ...passed 00:22:41.637 Test: blockdev writev readv size > 128k ...passed 00:22:41.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:41.637 Test: blockdev comparev and writev ...passed 00:22:41.637 Test: blockdev nvme passthru rw ...passed 00:22:41.637 Test: blockdev nvme passthru vendor specific ...passed 00:22:41.637 Test: blockdev nvme admin passthru ...passed 00:22:41.637 Test: blockdev copy ...passed 00:22:41.637 00:22:41.637 Run Summary: Type Total Ran Passed Failed Inactive 00:22:41.637 suites 1 1 n/a 0 0 00:22:41.637 tests 23 23 23 0 0 00:22:41.637 asserts 130 130 130 0 n/a 00:22:41.637 00:22:41.637 Elapsed time = 0.598 seconds 00:22:41.637 0 00:22:41.637 22:39:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # killprocess 91359 00:22:41.637 
22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 91359 ']' 00:22:41.637 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 91359 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91359 00:22:41.896 killing process with pid 91359 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91359' 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 91359 00:22:41.896 22:39:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 91359 00:22:44.430 22:39:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@297 -- # trap - SIGINT SIGTERM EXIT 00:22:44.430 00:22:44.430 real 0m4.105s 00:22:44.430 user 0m10.212s 00:22:44.430 sys 0m0.480s 00:22:44.430 22:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.430 ************************************ 00:22:44.430 END TEST bdev_bounds 00:22:44.430 ************************************ 00:22:44.430 22:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:44.430 22:39:39 blockdev_raid5f -- bdev/blockdev.sh@763 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:44.430 22:39:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:44.430 22:39:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:22:44.430 22:39:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:44.430 ************************************ 00:22:44.430 START TEST bdev_nbd 00:22:44.430 ************************************ 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # uname -s 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # [[ Linux == Linux ]] 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # bdev_all=('raid5f') 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_all 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@306 -- # local bdev_num=1 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # [[ -e /sys/module/nbd ]] 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # local nbd_all 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # bdev_num=1 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # nbd_list=('/dev/nbd0') 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local nbd_list 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # bdev_list=('raid5f') 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@316 -- # local bdev_list 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # nbd_pid=91430 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # waitforlisten 91430 /var/tmp/spdk-nbd.sock 00:22:44.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 91430 ']' 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.430 22:39:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:44.430 [2024-09-27 22:39:39.998022] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:22:44.430 [2024-09-27 22:39:39.998333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.430 [2024-09-27 22:39:40.172609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.690 [2024-09-27 22:39:40.401878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:45.629 22:39:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:45.887 1+0 records in 00:22:45.887 1+0 records out 00:22:45.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437634 s, 9.4 MB/s 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:45.887 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:46.145 { 00:22:46.145 "nbd_device": "/dev/nbd0", 00:22:46.145 "bdev_name": "raid5f" 00:22:46.145 } 00:22:46.145 ]' 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:46.145 { 00:22:46.145 "nbd_device": "/dev/nbd0", 00:22:46.145 "bdev_name": "raid5f" 00:22:46.145 } 00:22:46.145 ]' 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:46.145 22:39:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:46.402 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:46.660 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:46.660 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.661 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:46.920 /dev/nbd0 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:46.920 22:39:42 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:46.920 1+0 records in 00:22:46.920 1+0 records out 00:22:46.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352043 s, 11.6 MB/s 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:46.920 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:47.180 { 00:22:47.180 "nbd_device": "/dev/nbd0", 00:22:47.180 "bdev_name": "raid5f" 00:22:47.180 } 00:22:47.180 ]' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:47.180 { 00:22:47.180 "nbd_device": "/dev/nbd0", 00:22:47.180 "bdev_name": "raid5f" 00:22:47.180 } 00:22:47.180 ]' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:47.180 256+0 records in 00:22:47.180 256+0 records out 00:22:47.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123485 s, 84.9 MB/s 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:47.180 256+0 records in 00:22:47.180 256+0 records out 00:22:47.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339598 s, 30.9 MB/s 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:47.180 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:47.181 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:47.181 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:47.181 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:47.181 22:39:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:47.440 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:47.699 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:47.958 malloc_lvol_verify 00:22:47.958 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:48.217 55fbf888-98a8-454c-8a4f-d6f97bea2fb9 00:22:48.217 22:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:48.217 8a573a4a-fc9d-4de2-8f5b-72a99183a14e 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:48.478 /dev/nbd0 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:48.478 mke2fs 1.47.0 (5-Feb-2023) 00:22:48.478 Discarding device blocks: 0/4096 done 00:22:48.478 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:48.478 00:22:48.478 Allocating group tables: 0/1 done 00:22:48.478 Writing inode tables: 0/1 done 00:22:48.478 Creating journal (1024 blocks): done 00:22:48.478 Writing superblocks and filesystem accounting information: 0/1 done 00:22:48.478 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:48.478 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # killprocess 91430 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 91430 ']' 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 91430 00:22:48.782 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91430 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:48.783 killing process with pid 91430 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91430' 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 91430 00:22:48.783 22:39:44 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 91430 00:22:51.318 22:39:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@328 -- # trap - SIGINT SIGTERM EXIT 00:22:51.318 00:22:51.318 real 0m7.022s 00:22:51.318 user 0m8.852s 00:22:51.318 sys 0m1.498s 00:22:51.318 22:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:51.318 22:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:51.318 ************************************ 00:22:51.318 END TEST bdev_nbd 00:22:51.318 ************************************ 00:22:51.318 22:39:46 blockdev_raid5f -- bdev/blockdev.sh@764 -- # [[ y == y ]] 00:22:51.318 22:39:46 blockdev_raid5f -- bdev/blockdev.sh@765 -- # '[' raid5f = nvme ']' 00:22:51.318 22:39:46 blockdev_raid5f -- bdev/blockdev.sh@765 -- # '[' raid5f = gpt ']' 00:22:51.318 22:39:46 blockdev_raid5f -- bdev/blockdev.sh@769 -- # run_test bdev_fio fio_test_suite '' 00:22:51.318 22:39:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:51.318 22:39:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:51.318 22:39:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:51.318 ************************************ 00:22:51.318 START TEST bdev_fio 00:22:51.318 ************************************ 00:22:51.318 22:39:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:22:51.318 22:39:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@332 -- # local env_context 00:22:51.318 22:39:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:51.318 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:51.318 22:39:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@337 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:51.318 22:39:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # echo '' 00:22:51.318 22:39:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@340 -- # sed s/--env-context=// 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # env_context= 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # for b in "${bdevs_name[@]}" 00:22:51.318 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo '[job_raid5f]' 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@344 -- # echo filename=raid5f 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:51.319 ************************************ 00:22:51.319 START TEST bdev_fio_rw_verify 00:22:51.319 ************************************ 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:51.319 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:51.578 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:51.579 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:51.579 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:22:51.579 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:51.579 22:39:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:51.579 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:51.579 fio-3.35 00:22:51.579 Starting 1 thread 00:23:03.791 00:23:03.791 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91650: Fri Sep 27 22:39:58 2024 00:23:03.791 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(444MiB/10001msec) 00:23:03.791 slat (nsec): min=18564, max=55230, avg=20747.33, stdev=1981.45 00:23:03.791 clat (usec): min=11, max=329, avg=143.66, stdev=49.07 00:23:03.791 lat (usec): min=31, max=363, avg=164.41, stdev=49.27 00:23:03.791 clat percentiles (usec): 00:23:03.791 | 50.000th=[ 147], 99.000th=[ 233], 99.900th=[ 269], 99.990th=[ 297], 00:23:03.791 | 99.999th=[ 318] 00:23:03.791 write: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(459MiB/9882msec); 0 zone resets 00:23:03.791 slat (usec): min=8, max=130, avg=17.78, stdev= 3.16 00:23:03.791 clat (usec): min=59, max=870, avg=319.64, stdev=40.16 00:23:03.791 lat (usec): min=76, max=1000, avg=337.42, stdev=40.82 00:23:03.791 clat percentiles (usec): 00:23:03.791 | 50.000th=[ 322], 99.000th=[ 396], 99.900th=[ 515], 99.990th=[ 783], 00:23:03.791 | 99.999th=[ 840] 00:23:03.791 bw ( KiB/s): min=42792, max=50512, per=99.05%, avg=47123.16, stdev=2210.63, samples=19 00:23:03.791 iops : min=10698, max=12628, avg=11780.79, stdev=552.66, samples=19 00:23:03.791 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.97%, 250=39.67%, 500=48.29% 00:23:03.791 lat (usec) : 750=0.05%, 1000=0.01% 00:23:03.791 cpu : usr=98.91%, sys=0.45%, ctx=17, majf=0, minf=9408 00:23:03.791 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:03.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.791 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.791 issued rwts: total=113676,117538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.791 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:03.791 00:23:03.791 Run status group 0 (all jobs): 00:23:03.791 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=444MiB (466MB), run=10001-10001msec 00:23:03.791 WRITE: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=459MiB (481MB), run=9882-9882msec 00:23:05.169 ----------------------------------------------------- 00:23:05.169 Suppressions used: 00:23:05.169 count bytes template 00:23:05.169 1 7 /usr/src/fio/parse.c 00:23:05.169 331 31776 /usr/src/fio/iolog.c 00:23:05.169 1 8 libtcmalloc_minimal.so 00:23:05.169 1 904 libcrypto.so 00:23:05.169 ----------------------------------------------------- 00:23:05.169 00:23:05.169 00:23:05.169 real 0m13.816s 00:23:05.169 user 0m13.979s 00:23:05.169 sys 0m0.670s 00:23:05.169 ************************************ 00:23:05.169 END TEST bdev_fio_rw_verify 00:23:05.169 ************************************ 00:23:05.169 22:40:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.169 22:40:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@352 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:23:05.169 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:23:05.170 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:23:05.170 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cda61766-5b21-4bea-8156-14c21d0ef954"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cda61766-5b21-4bea-8156-14c21d0ef954",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cda61766-5b21-4bea-8156-14c21d0ef954",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1179b394-9177-4b46-8c34-65408bebae87",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7f0f980e-248a-4342-aeae-b0741fc7874e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ee57fde1-eb83-498d-ad94-84cb848e0b99",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:05.170 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@356 -- # [[ -n '' ]] 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # popd 00:23:05.466 /home/vagrant/spdk_repo/spdk 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # trap - SIGINT SIGTERM EXIT 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@365 -- # return 0 00:23:05.466 00:23:05.466 real 0m14.097s 
00:23:05.466 user 0m14.096s 00:23:05.466 sys 0m0.802s 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.466 22:40:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:05.466 ************************************ 00:23:05.466 END TEST bdev_fio 00:23:05.466 ************************************ 00:23:05.466 22:40:01 blockdev_raid5f -- bdev/blockdev.sh@776 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:05.466 22:40:01 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:05.466 22:40:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:23:05.466 22:40:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.466 22:40:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:05.466 ************************************ 00:23:05.466 START TEST bdev_verify 00:23:05.466 ************************************ 00:23:05.466 22:40:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:05.466 [2024-09-27 22:40:01.251711] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:23:05.466 [2024-09-27 22:40:01.251839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91818 ] 00:23:05.725 [2024-09-27 22:40:01.423339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:05.984 [2024-09-27 22:40:01.639769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.984 [2024-09-27 22:40:01.639812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.919 Running I/O for 5 seconds... 00:23:12.047 14008.00 IOPS, 54.72 MiB/s 15479.00 IOPS, 60.46 MiB/s 16190.00 IOPS, 63.24 MiB/s 16382.00 IOPS, 63.99 MiB/s 16427.80 IOPS, 64.17 MiB/s 00:23:12.047 Latency(us) 00:23:12.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.047 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:12.047 Verification LBA range: start 0x0 length 0x2000 00:23:12.047 raid5f : 5.02 8219.41 32.11 0.00 0.00 23373.33 101.58 18739.61 00:23:12.047 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:12.047 Verification LBA range: start 0x2000 length 0x2000 00:23:12.047 raid5f : 5.02 8202.64 32.04 0.00 0.00 23395.64 246.75 18739.61 00:23:12.047 =================================================================================================================== 00:23:12.047 Total : 16422.05 64.15 0.00 0.00 23384.47 101.58 18739.61 00:23:14.584 00:23:14.584 real 0m8.680s 00:23:14.584 user 0m15.909s 00:23:14.584 sys 0m0.335s 00:23:14.584 22:40:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:14.584 22:40:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:14.584 ************************************ 00:23:14.584 END TEST bdev_verify 00:23:14.584 
************************************ 00:23:14.584 22:40:09 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:14.584 22:40:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:23:14.584 22:40:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:14.584 22:40:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:14.584 ************************************ 00:23:14.584 START TEST bdev_verify_big_io 00:23:14.584 ************************************ 00:23:14.584 22:40:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:14.584 [2024-09-27 22:40:10.008039] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:23:14.584 [2024-09-27 22:40:10.008196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91927 ] 00:23:14.584 [2024-09-27 22:40:10.179678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:14.584 [2024-09-27 22:40:10.403301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.584 [2024-09-27 22:40:10.403341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.520 Running I/O for 5 seconds... 
00:23:20.682 758.00 IOPS, 47.38 MiB/s 854.50 IOPS, 53.41 MiB/s 846.00 IOPS, 52.88 MiB/s 888.00 IOPS, 55.50 MiB/s 875.60 IOPS, 54.73 MiB/s 00:23:20.682 Latency(us) 00:23:20.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.682 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:20.682 Verification LBA range: start 0x0 length 0x200 00:23:20.682 raid5f : 5.19 440.49 27.53 0.00 0.00 7236558.21 156.27 316678.37 00:23:20.682 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:20.682 Verification LBA range: start 0x200 length 0x200 00:23:20.682 raid5f : 5.18 441.27 27.58 0.00 0.00 7192515.35 281.29 314993.91 00:23:20.682 =================================================================================================================== 00:23:20.682 Total : 881.76 55.11 0.00 0.00 7214536.78 156.27 316678.37 00:23:23.219 00:23:23.219 real 0m8.866s 00:23:23.219 user 0m16.256s 00:23:23.219 sys 0m0.352s 00:23:23.219 22:40:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.219 ************************************ 00:23:23.219 END TEST bdev_verify_big_io 00:23:23.219 ************************************ 00:23:23.219 22:40:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:23.219 22:40:18 blockdev_raid5f -- bdev/blockdev.sh@780 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:23.219 22:40:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:23:23.219 22:40:18 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.219 22:40:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:23.219 ************************************ 00:23:23.219 START TEST bdev_write_zeroes 00:23:23.219 ************************************ 
00:23:23.219 22:40:18 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:23.219 [2024-09-27 22:40:18.951108] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:23:23.219 [2024-09-27 22:40:18.951224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92041 ] 00:23:23.479 [2024-09-27 22:40:19.121785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.479 [2024-09-27 22:40:19.350775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.458 Running I/O for 1 seconds... 00:23:25.396 27183.00 IOPS, 106.18 MiB/s 00:23:25.396 Latency(us) 00:23:25.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.396 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:25.396 raid5f : 1.01 27155.96 106.08 0.00 0.00 4698.10 1467.32 6606.24 00:23:25.396 =================================================================================================================== 00:23:25.396 Total : 27155.96 106.08 0.00 0.00 4698.10 1467.32 6606.24 00:23:27.932 00:23:27.932 real 0m4.621s 00:23:27.932 user 0m4.155s 00:23:27.932 sys 0m0.334s 00:23:27.932 22:40:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:27.932 22:40:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:27.932 ************************************ 00:23:27.932 END TEST bdev_write_zeroes 00:23:27.932 ************************************ 00:23:27.932 22:40:23 blockdev_raid5f -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:27.932 22:40:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:23:27.932 22:40:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.932 22:40:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:27.932 ************************************ 00:23:27.932 START TEST bdev_json_nonenclosed 00:23:27.932 ************************************ 00:23:27.932 22:40:23 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:27.932 [2024-09-27 22:40:23.649088] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 00:23:27.932 [2024-09-27 22:40:23.649344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92105 ] 00:23:28.190 [2024-09-27 22:40:23.824220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.449 [2024-09-27 22:40:24.068408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.449 [2024-09-27 22:40:24.068509] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:23:28.449 [2024-09-27 22:40:24.068540] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:28.449 [2024-09-27 22:40:24.068554] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:28.707 ************************************ 00:23:28.707 END TEST bdev_json_nonenclosed 00:23:28.707 ************************************ 00:23:28.707 00:23:28.707 real 0m0.936s 00:23:28.707 user 0m0.678s 00:23:28.707 sys 0m0.151s 00:23:28.707 22:40:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.707 22:40:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:28.707 22:40:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:28.707 22:40:24 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:23:28.707 22:40:24 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.707 22:40:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:28.707 ************************************ 00:23:28.707 START TEST bdev_json_nonarray 00:23:28.707 ************************************ 00:23:28.707 22:40:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:28.965 [2024-09-27 22:40:24.653636] Starting SPDK v25.01-pre git sha1 a2e043c42 / DPDK 24.03.0 initialization... 
00:23:28.965 [2024-09-27 22:40:24.653751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92132 ] 00:23:28.965 [2024-09-27 22:40:24.822784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.223 [2024-09-27 22:40:25.048906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.223 [2024-09-27 22:40:25.049036] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:29.224 [2024-09-27 22:40:25.049071] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:29.224 [2024-09-27 22:40:25.049084] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:29.789 00:23:29.789 real 0m0.914s 00:23:29.789 user 0m0.666s 00:23:29.789 sys 0m0.142s 00:23:29.789 ************************************ 00:23:29.789 END TEST bdev_json_nonarray 00:23:29.789 ************************************ 00:23:29.789 22:40:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.789 22:40:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@788 -- # [[ raid5f == bdev ]] 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@795 -- # [[ raid5f == gpt ]] 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@799 -- # [[ raid5f == crypto_sw ]] 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@811 -- # trap - SIGINT SIGTERM EXIT 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@812 -- # cleanup 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:23:29.789 22:40:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:23:29.789 00:23:29.789 real 0m59.536s 00:23:29.789 user 1m19.938s 00:23:29.789 sys 0m5.672s 00:23:29.789 22:40:25 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.789 22:40:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.789 ************************************ 00:23:29.789 END TEST blockdev_raid5f 00:23:29.789 ************************************ 00:23:29.789 22:40:25 -- spdk/autotest.sh@194 -- # uname -s 00:23:29.789 22:40:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:29.789 22:40:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:29.789 22:40:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:29.789 22:40:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:23:29.789 22:40:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:23:29.789 22:40:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:23:29.789 22:40:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.789 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:23:30.047 22:40:25 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:30.047 22:40:25 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:23:30.047 22:40:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:30.047 22:40:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:30.047 22:40:25 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:23:30.047 22:40:25 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:23:30.047 22:40:25 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:23:30.047 22:40:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.047 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:23:30.047 22:40:25 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:23:30.047 22:40:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:30.047 22:40:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:30.047 22:40:25 -- common/autotest_common.sh@10 -- # set +x 00:23:31.975 INFO: APP EXITING 00:23:31.975 INFO: killing all VMs 00:23:31.975 INFO: killing vhost app 00:23:31.975 INFO: EXIT DONE 00:23:32.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:32.545 Waiting for block devices as requested 00:23:32.804 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.804 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:33.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:33.742 Cleaning 00:23:33.742 Removing: /var/run/dpdk/spdk0/config 00:23:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:33.742 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:33.742 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:33.742 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:33.742 Removing: /dev/shm/spdk_tgt_trace.pid56691 00:23:33.742 Removing: /var/run/dpdk/spdk0 00:23:33.742 Removing: /var/run/dpdk/spdk_pid56445 00:23:33.742 Removing: /var/run/dpdk/spdk_pid56691 00:23:33.742 Removing: /var/run/dpdk/spdk_pid56938 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57053 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57120 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57259 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57288 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57509 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57651 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57769 00:23:33.742 Removing: /var/run/dpdk/spdk_pid57908 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58031 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58072 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58114 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58190 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58318 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58789 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58887 00:23:33.742 Removing: /var/run/dpdk/spdk_pid58985 00:23:33.742 Removing: /var/run/dpdk/spdk_pid59022 00:23:33.742 Removing: /var/run/dpdk/spdk_pid59214 00:23:34.001 Removing: /var/run/dpdk/spdk_pid59241 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59434 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59457 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59543 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59573 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59647 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59666 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59893 00:23:34.002 Removing: /var/run/dpdk/spdk_pid59931 00:23:34.002 Removing: /var/run/dpdk/spdk_pid60020 00:23:34.002 Removing: /var/run/dpdk/spdk_pid61500 00:23:34.002 Removing: 
/var/run/dpdk/spdk_pid61717 00:23:34.002 Removing: /var/run/dpdk/spdk_pid61868 00:23:34.002 Removing: /var/run/dpdk/spdk_pid62561 00:23:34.002 Removing: /var/run/dpdk/spdk_pid62778 00:23:34.002 Removing: /var/run/dpdk/spdk_pid62939 00:23:34.002 Removing: /var/run/dpdk/spdk_pid63634 00:23:34.002 Removing: /var/run/dpdk/spdk_pid63975 00:23:34.002 Removing: /var/run/dpdk/spdk_pid64132 00:23:34.002 Removing: /var/run/dpdk/spdk_pid65561 00:23:34.002 Removing: /var/run/dpdk/spdk_pid65825 00:23:34.002 Removing: /var/run/dpdk/spdk_pid65982 00:23:34.002 Removing: /var/run/dpdk/spdk_pid67400 00:23:34.002 Removing: /var/run/dpdk/spdk_pid67664 00:23:34.002 Removing: /var/run/dpdk/spdk_pid67821 00:23:34.002 Removing: /var/run/dpdk/spdk_pid69253 00:23:34.002 Removing: /var/run/dpdk/spdk_pid69715 00:23:34.002 Removing: /var/run/dpdk/spdk_pid69872 00:23:34.002 Removing: /var/run/dpdk/spdk_pid71406 00:23:34.002 Removing: /var/run/dpdk/spdk_pid71683 00:23:34.002 Removing: /var/run/dpdk/spdk_pid71840 00:23:34.002 Removing: /var/run/dpdk/spdk_pid73375 00:23:34.002 Removing: /var/run/dpdk/spdk_pid73651 00:23:34.002 Removing: /var/run/dpdk/spdk_pid73813 00:23:34.002 Removing: /var/run/dpdk/spdk_pid75343 00:23:34.002 Removing: /var/run/dpdk/spdk_pid75847 00:23:34.002 Removing: /var/run/dpdk/spdk_pid76004 00:23:34.002 Removing: /var/run/dpdk/spdk_pid76164 00:23:34.002 Removing: /var/run/dpdk/spdk_pid76601 00:23:34.002 Removing: /var/run/dpdk/spdk_pid77354 00:23:34.002 Removing: /var/run/dpdk/spdk_pid77752 00:23:34.002 Removing: /var/run/dpdk/spdk_pid78471 00:23:34.002 Removing: /var/run/dpdk/spdk_pid78951 00:23:34.002 Removing: /var/run/dpdk/spdk_pid79757 00:23:34.002 Removing: /var/run/dpdk/spdk_pid80185 00:23:34.002 Removing: /var/run/dpdk/spdk_pid82210 00:23:34.002 Removing: /var/run/dpdk/spdk_pid82666 00:23:34.002 Removing: /var/run/dpdk/spdk_pid83122 00:23:34.002 Removing: /var/run/dpdk/spdk_pid85235 00:23:34.002 Removing: /var/run/dpdk/spdk_pid85731 00:23:34.002 Removing: 
/var/run/dpdk/spdk_pid86259 00:23:34.002 Removing: /var/run/dpdk/spdk_pid87338 00:23:34.002 Removing: /var/run/dpdk/spdk_pid87672 00:23:34.002 Removing: /var/run/dpdk/spdk_pid88627 00:23:34.002 Removing: /var/run/dpdk/spdk_pid88957 00:23:34.002 Removing: /var/run/dpdk/spdk_pid89910 00:23:34.002 Removing: /var/run/dpdk/spdk_pid90238 00:23:34.002 Removing: /var/run/dpdk/spdk_pid90929 00:23:34.002 Removing: /var/run/dpdk/spdk_pid91222 00:23:34.261 Removing: /var/run/dpdk/spdk_pid91299 00:23:34.261 Removing: /var/run/dpdk/spdk_pid91359 00:23:34.261 Removing: /var/run/dpdk/spdk_pid91632 00:23:34.261 Removing: /var/run/dpdk/spdk_pid91818 00:23:34.261 Removing: /var/run/dpdk/spdk_pid91927 00:23:34.261 Removing: /var/run/dpdk/spdk_pid92041 00:23:34.261 Removing: /var/run/dpdk/spdk_pid92105 00:23:34.261 Removing: /var/run/dpdk/spdk_pid92132 00:23:34.261 Clean 00:23:34.261 22:40:29 -- common/autotest_common.sh@1451 -- # return 0 00:23:34.261 22:40:29 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:23:34.261 22:40:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.261 22:40:29 -- common/autotest_common.sh@10 -- # set +x 00:23:34.261 22:40:30 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:23:34.261 22:40:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.261 22:40:30 -- common/autotest_common.sh@10 -- # set +x 00:23:34.261 22:40:30 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:34.261 22:40:30 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:34.261 22:40:30 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:34.261 22:40:30 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:23:34.261 22:40:30 -- spdk/autotest.sh@394 -- # hostname 00:23:34.262 22:40:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:34.521 geninfo: WARNING: invalid characters removed from testname! 00:24:01.069 22:40:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:01.069 22:40:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:01.639 22:40:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:03.546 22:40:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:05.450 22:41:01 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:07.981 22:41:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:09.886 22:41:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:09.886 22:41:05 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:24:09.886 22:41:05 -- common/autotest_common.sh@1681 -- $ lcov --version 00:24:09.886 22:41:05 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:24:09.886 22:41:05 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:24:09.886 22:41:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:24:09.886 22:41:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:24:09.886 22:41:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:24:09.886 22:41:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:24:09.886 22:41:05 -- scripts/common.sh@336 -- $ read -ra ver1 00:24:09.886 22:41:05 -- scripts/common.sh@337 -- $ IFS=.-: 00:24:09.886 22:41:05 -- scripts/common.sh@337 -- $ read -ra ver2 00:24:09.886 22:41:05 -- scripts/common.sh@338 -- $ local 'op=<' 00:24:09.886 22:41:05 -- scripts/common.sh@340 -- $ ver1_l=2 00:24:09.886 22:41:05 -- scripts/common.sh@341 -- $ ver2_l=1 00:24:09.886 22:41:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:24:09.886 22:41:05 -- scripts/common.sh@344 -- $ case "$op" in 00:24:09.886 22:41:05 -- scripts/common.sh@345 -- $ : 1 
00:24:09.886 22:41:05 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:24:09.886 22:41:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.886 22:41:05 -- scripts/common.sh@365 -- $ decimal 1 00:24:09.886 22:41:05 -- scripts/common.sh@353 -- $ local d=1 00:24:09.886 22:41:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:24:09.886 22:41:05 -- scripts/common.sh@355 -- $ echo 1 00:24:09.887 22:41:05 -- scripts/common.sh@365 -- $ ver1[v]=1 00:24:09.887 22:41:05 -- scripts/common.sh@366 -- $ decimal 2 00:24:09.887 22:41:05 -- scripts/common.sh@353 -- $ local d=2 00:24:09.887 22:41:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:24:09.887 22:41:05 -- scripts/common.sh@355 -- $ echo 2 00:24:09.887 22:41:05 -- scripts/common.sh@366 -- $ ver2[v]=2 00:24:09.887 22:41:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:24:09.887 22:41:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:24:09.887 22:41:05 -- scripts/common.sh@368 -- $ return 0 00:24:09.887 22:41:05 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.887 22:41:05 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:24:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.887 --rc genhtml_branch_coverage=1 00:24:09.887 --rc genhtml_function_coverage=1 00:24:09.887 --rc genhtml_legend=1 00:24:09.887 --rc geninfo_all_blocks=1 00:24:09.887 --rc geninfo_unexecuted_blocks=1 00:24:09.887 00:24:09.887 ' 00:24:09.887 22:41:05 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:24:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.887 --rc genhtml_branch_coverage=1 00:24:09.887 --rc genhtml_function_coverage=1 00:24:09.887 --rc genhtml_legend=1 00:24:09.887 --rc geninfo_all_blocks=1 00:24:09.887 --rc geninfo_unexecuted_blocks=1 00:24:09.887 00:24:09.887 ' 00:24:09.887 22:41:05 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:24:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.887 --rc genhtml_branch_coverage=1 00:24:09.887 --rc genhtml_function_coverage=1 00:24:09.887 --rc genhtml_legend=1 00:24:09.887 --rc geninfo_all_blocks=1 00:24:09.887 --rc geninfo_unexecuted_blocks=1 00:24:09.887 00:24:09.887 ' 00:24:09.887 22:41:05 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:24:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.887 --rc genhtml_branch_coverage=1 00:24:09.887 --rc genhtml_function_coverage=1 00:24:09.887 --rc genhtml_legend=1 00:24:09.887 --rc geninfo_all_blocks=1 00:24:09.887 --rc geninfo_unexecuted_blocks=1 00:24:09.887 00:24:09.887 ' 00:24:09.887 22:41:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.887 22:41:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:24:09.887 22:41:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:09.887 22:41:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.887 22:41:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.887 22:41:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.887 22:41:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.887 22:41:05 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.887 22:41:05 -- paths/export.sh@5 -- $ export PATH 00:24:09.887 22:41:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.887 22:41:05 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:09.887 22:41:05 -- common/autobuild_common.sh@479 -- $ date +%s 00:24:09.887 22:41:05 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727476865.XXXXXX 00:24:09.887 22:41:05 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727476865.IILDu5 00:24:09.887 22:41:05 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:24:09.887 22:41:05 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:24:09.887 22:41:05 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:09.887 22:41:05 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:09.887 22:41:05 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:09.887 22:41:05 -- common/autobuild_common.sh@495 -- $ 
get_config_params 00:24:09.887 22:41:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:24:09.887 22:41:05 -- common/autotest_common.sh@10 -- $ set +x 00:24:09.887 22:41:05 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:24:09.887 22:41:05 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:24:09.887 22:41:05 -- pm/common@17 -- $ local monitor 00:24:09.887 22:41:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:09.887 22:41:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:09.887 22:41:05 -- pm/common@25 -- $ sleep 1 00:24:09.887 22:41:05 -- pm/common@21 -- $ date +%s 00:24:09.887 22:41:05 -- pm/common@21 -- $ date +%s 00:24:09.887 22:41:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727476865 00:24:09.887 22:41:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727476865 00:24:09.887 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727476865_collect-cpu-load.pm.log 00:24:09.887 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727476865_collect-vmstat.pm.log 00:24:10.823 22:41:06 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:24:10.823 22:41:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:24:10.823 22:41:06 -- spdk/autopackage.sh@14 -- $ timing_finish 00:24:10.823 22:41:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:10.823 22:41:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:10.823 
22:41:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:10.823 22:41:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:10.823 22:41:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:10.823 22:41:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:10.823 22:41:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:10.823 22:41:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:10.823 22:41:06 -- pm/common@44 -- $ pid=93639 00:24:10.823 22:41:06 -- pm/common@50 -- $ kill -TERM 93639 00:24:10.823 22:41:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:10.823 22:41:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:10.823 22:41:06 -- pm/common@44 -- $ pid=93641 00:24:10.823 22:41:06 -- pm/common@50 -- $ kill -TERM 93641 00:24:10.823 + [[ -n 5210 ]] 00:24:10.823 + sudo kill 5210 00:24:11.091 [Pipeline] } 00:24:11.109 [Pipeline] // timeout 00:24:11.115 [Pipeline] } 00:24:11.131 [Pipeline] // stage 00:24:11.137 [Pipeline] } 00:24:11.152 [Pipeline] // catchError 00:24:11.162 [Pipeline] stage 00:24:11.164 [Pipeline] { (Stop VM) 00:24:11.176 [Pipeline] sh 00:24:11.455 + vagrant halt 00:24:14.770 ==> default: Halting domain... 00:24:21.351 [Pipeline] sh 00:24:21.648 + vagrant destroy -f 00:24:24.938 ==> default: Removing domain... 
00:24:24.951 [Pipeline] sh 00:24:25.233 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:24:25.241 [Pipeline] } 00:24:25.258 [Pipeline] // stage 00:24:25.265 [Pipeline] } 00:24:25.280 [Pipeline] // dir 00:24:25.286 [Pipeline] } 00:24:25.299 [Pipeline] // wrap 00:24:25.305 [Pipeline] } 00:24:25.319 [Pipeline] // catchError 00:24:25.328 [Pipeline] stage 00:24:25.331 [Pipeline] { (Epilogue) 00:24:25.344 [Pipeline] sh 00:24:25.627 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:30.915 [Pipeline] catchError 00:24:30.916 [Pipeline] { 00:24:30.929 [Pipeline] sh 00:24:31.212 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:31.212 Artifacts sizes are good 00:24:31.222 [Pipeline] } 00:24:31.236 [Pipeline] // catchError 00:24:31.248 [Pipeline] archiveArtifacts 00:24:31.255 Archiving artifacts 00:24:31.369 [Pipeline] cleanWs 00:24:31.381 [WS-CLEANUP] Deleting project workspace... 00:24:31.381 [WS-CLEANUP] Deferred wipeout is used... 00:24:31.387 [WS-CLEANUP] done 00:24:31.390 [Pipeline] } 00:24:31.406 [Pipeline] // stage 00:24:31.411 [Pipeline] } 00:24:31.424 [Pipeline] // node 00:24:31.429 [Pipeline] End of Pipeline 00:24:31.475 Finished: SUCCESS